Article

Validation Scores to Evaluate the Detection Capability of Sensor Systems Used for Autonomous Machines in Outdoor Environments

1 Faculty of Engineering and Computer Science, University of Applied Sciences Osnabrück, 49076 Osnabrück, Germany
2 SICK AG, 79183 Waldkirch, Germany
* Author to whom correspondence should be addressed.
Submission received: 15 May 2024 / Revised: 8 June 2024 / Accepted: 17 June 2024 / Published: 19 June 2024
(This article belongs to the Special Issue Intelligent Sensor Systems Applied in Smart Agriculture)

Abstract
The characterization of the detection capability becomes significant when the reliable monitoring of the region of interest by a non-contact sensor is a safety-relevant function. This paper introduces new validation scores that evaluate the detection capability of non-contact sensors intended for application on outdoor machines. The scores quantify, in terms of safety, the suitability of a sensor for its intended implementation in the environmental perception system of a (highly) automated machine. This was achieved by developing an extension to the new Real Environment Detection Area (REDA) method and linking the methodology with the sensor standard IEC/TS 62998-1. The extension includes point-by-point and statistics-based error evaluation, which leads to the Usability-Score, Availability-Score, and Reliability-Score. By applying the principle in the agricultural sector using ISO 18497 and linking it with data from a real outdoor test stand, it was possible to show that the validation scores offer a generic approach to quantifying the detection capability and expressing it in a machine-manufacturer-oriented manner. The findings of this study have significant implications for the advancement of safety-related sensor systems integrated into machines operating in complex environments. For full implementation, the standards must define which score is required for each Performance Level (PL).

1. Introduction

The detection capability of non-contact sensors is a key feature that enables highly automated work machines to be brought to market. This research deals with the elaboration of validation scores to evaluate the detection capability of non-contact sensors. The need for a description of the detection capability emerged with the advent of autonomous work machines, where reliable environmental perception plays an enabling role [1,2]. Since a human being no longer performs this task, the environment perception system is responsible for monitoring the machine’s surroundings for the absence of safety-related objects [3]. Its reliable performance is crucial for ensuring the safety of both the living beings in its surroundings and the machine itself [4]. It is, therefore, essential that an unmanned vehicle is capable of detecting the presence of any living beings, such as people, in its close vicinity. From a standardization perspective, this is already stated in ISO 18497, i.e., for an automated agricultural machine, machine-specific hazard and warning zones have to be defined [5]. This makes the environment perception a safety-related function whose safe execution must be demonstrated. A key problem that arises in the outdoor domain is shown in Figure 1: the environment of an outdoor working machine can be harsh and, therefore, challenge the measuring principle or mislead the processing unit of the sensor that is used for environment perception on the machine. Whether a perception system’s detection result is affected by environmental influences such as fog, dust, etc. strongly depends on the system’s detection capability, making the detection capability a safety-related characteristic.
Although the terms safety and reliability are both typical for functional safety, functional safety is not the key problem when it comes to the assurance of reliable environment perception for autonomous machines. As an illustrative example, a straightforward and safe approach for the machine depicted in Figure 1 would be to apply laser scanners that meet the required Performance Level (PL) to the machine. In the event that the sensors measure an object within the designated hazard zone, the machine is placed in a safe state, which in this case is to cease operation. From a safety perspective, this would appear to meet the requisite standards. However, the measured detection can also be triggered by dust or vegetation. This example, thus, describes the two relevant properties that must be weighed against each other in order to assess the detection capability of a sensor that is to be attached to a mobile machine outdoors. A sensor with very high reliability in terms of detection capability will probably exhibit lower availability in return. Given that the machine’s environment perception is safety-related, the system will probably overrule every other system on the machine, meaning that the reliability and availability of the sensor’s detection capability are directly related to the machine’s reliability and availability. In Figure 2, it is shown that the classical functional safety approach is predicated on the prevention of systematic and random errors in hardware and software during machine operation. How these requirements have to be met is well defined and can also be expressed by requiring a Performance Level (PL) or an Agricultural Performance Level (AgPL) [5,6,7]. However, a living being approaching the machine is an event that originates outside of the machine, making this a safety problem that, following the classical safety approaches, cannot be addressed by the prevention measures on the machine.
Nevertheless, misinterpretation of perception data due to environmental influences can lead to misperformance of the intended function and create a safety-critical situation [8,9,10]. The assurance that sensing units reliably perceive such a situation is not addressed by standards, as shown in Figure 2. No standard offers a way to give assurance that the sensing unit reliably perceives all of the relevant parameters of the safety-related object despite the environmental conditions. But for safety, this assurance is fundamental, which means that it has to be proven that the perception system chosen for the machine is capable of detecting the identified worst-case parameters of the safety-related object.
In the automotive sector, the Safety Of The Intended Functionality (SOTIF [11]) addresses safety measures for those safety-critical situations. The environment perception is a concrete example of this, and the problem of missing regulations arises not just in the agricultural sector. Tiusanen et al. highlighted this gap in regulations for autonomous work machines [8]. But unlike ISO 21815 ‘Earth-moving machinery—Collision warning and avoidance’ [12], there is no regulation initiative in the agricultural sector for the assurance of the reliable environment perception of autonomous machines. This is contradictory, as ISO 18497 requires reliable perception but does not provide concrete solutions or metrics for assurance. Such a need for safety regulations is also highlighted by a member survey conducted by the VDI in 2022, in which 40% of respondents rated the urgency of the need at 10 out of 10 points [13]. With the safety relevance comes the need to make the reliability and availability of the detection capability measurable, and therefore, assessable. This led to the development of the new REDA (Real Environment Detection Area) method in previous work. This marked one of the first approaches to measure, visualize, and describe the detection capability in static and dynamic test situations for any sensor system. The evaluation expresses the detection capability based on the sensor’s Region Of Interest (ROI), in direct relation to the environmental conditions, by introducing the metric of the REDA-Score. The test method in combination with the metric offered the possibility to compare and evaluate sensor technologies, sensor configurations, and evaluation algorithms, and therefore, offered a new level of detail for the evaluation of the detection capability [14].
Due to the above-stated lack of validation from the perspective of the autonomous machine, Meltebrink presented the Top-Down approach to proving a sensor’s suitability for the machine from the sensor perspective, using the new sensor standard IEC/TS 62998-1 [15]. The proposal was based on the possibility offered by the standard to carry out the design, integration, and validation of safety-related sensors (SRS) with special consideration of systematic capabilities. The top-down approach claimed to offer the use of validation data from the REDA methodology, providing validation of a machine with proven detection capability as the key argument [16]. This approach contrasts with the conventional design process of safety-related systems, which relies exclusively on the deployment of safety sensors to fulfill the requisite safety requirements. The applicability of this concept had yet to be proven, and it needed to be clarified whether the REDA method and the resulting data were compatible with IEC/TS 62998-1 and, more importantly for the agricultural sector, whether they provided a solution for reliable monitoring of hazard and warning zones as required by ISO 18497.
From a practical point of view, the application of this proposed approach, with the particular ambition to elaborate a validation scoring that describes a sensor’s detection capability in terms relevant to safety-related characteristics of autonomous machines, is of high interest from two perspectives. Firstly, an autonomous machine manufacturer is primarily interested in the safe and reliable process performance of the developed autonomous machine, but is aware of the environmental influences that may conflict with the process; the main interest is therefore to identify the sensor that performs well in the machine-specific environment. Secondly, from the sensor manufacturer’s point of view, it is of particular interest to identify the limitations of the sensor system with respect to environment perception in outdoor areas, and therefore, a way is needed to express the sensor’s detection capability in relation to the possible environmental influences. Furthermore, the performance evaluation enables comparison of the effects of sensor configuration changes or quantification of evaluation algorithms. Therefore, a quantification of the detection capability is required, which enables a statement to be made about the safety suitability of a sensor for an autonomous working machine. Such safety scores should provide the autonomous machine designer with an at-a-glance view of how a sensor’s detection capability will perform under the environmental conditions in which the machine will be used. This type of score could significantly shorten the design process of the non-contact sensor setup that is intended for the safe environment perception of an autonomous machine.

Objectives of the Work

The objective of this work is to elaborate and introduce new safety-related validation scores that evaluate the detection capability of sensor systems intended for application on (highly) automated outdoor machines. The main goal is to provide a scoring system that expresses the usability of a sensor system for mobile outdoor applications in a safety-related way. The identified safety-related characteristics are reliability and availability. This article shows how the REDA methodology allows the generic evaluation and quantification of the detection capability and that this principle complies with the new sensor standard IEC/TS 62998-1. This connection enables the practical usage of the elaborated scores for the detection capability of a sensor in a manner that is pertinent to safety. Such an expression is needed in safety certifications of autonomous machines, and thus, it acts as a main pillar in the introduction of autonomous machines.
The objective of this paper is to facilitate collaboration between sensor manufacturers and machine manufacturers in the development of environmental perception systems for autonomous machines. It should be noted that the assumptions made and formalisms proposed in this paper may be difficult to grasp without background knowledge. In order to enhance the clarity of the methodology in these cases, the application example illustrated in Figure 1 is utilized. For the tractor depicted, sensors are to be identified that are suitable for use in an assistance system designed to indicate the presence of people in the vicinity of the machine.

2. An Overview of Environment Perception for Agricultural Machinery

Since proving, validating, and visualizing the detection capability is challenging, this chapter gives an overview. The existing standards, with particular relevance to IEC/TS 62998-1, will be presented, as will the existing approaches to evaluating detection capability, which lead to the REDA methodology.

2.1. Standardization

The first category of the literature to be considered is that of the common standards. In this first instance, the machine and sensor standards will be the focus of attention.

2.1.1. Machine-Related Standards

With regard to the agricultural machine standards, the first relevant one is ISO 18497 ‘Agricultural machinery and tractors—Safety of highly automated agricultural machines—Principles for design’, which is under revision, with a new version expected soon [5]. The demand on environment perception is the assured monitoring of the machine’s warning and hazard zones. A schematic representation of the principle is provided in Figure 3. This implies that the machine designer must consider the potential points of entry of safety-related objects into the machine’s surrounding environment. As these entries are possible from several directions in three-dimensional space, the characteristics of the zones are machine-specific. As illustrated in the figure, the bird’s-eye perspective allows for a comparison of these zones, despite their individual three-dimensional properties. It is stated that it must be ensured that safety-related objects (e.g., persons) are outside the hazard zone. Therefore, the zones must be monitored, and audible or visual alarms must be triggered when a safety-related object enters the warning zone. Finally, if these precautions prove ineffective, the machine must be brought to a safe state before the safety-related object enters the hazard zone. From the perspective of machine standards, even regarding functional safety standards such as ISO 25119 [6] and ISO 13849 [7], regulations for the assurance of the reliable monitoring of these zones despite the environmental influences are non-existent. This is due to the need for proof of the detection capability, as emphasized in the Introduction with Figure 2.

2.1.2. Sensor-Related Standards

Regarding the problem from a sensor’s perspective, there is a way to provide validation by demonstrating the environment perception system’s detection capability despite the environmental challenges. For optical sensors (e.g., safety laser scanners, safety light grids, etc.) intended to be used on indoor machinery, IEC 61496 specifies requirements for the detection capability of these sensors [17]. However, the key challenge when it comes to outdoor environment perception is to ensure the detection capability despite the harsh environmental conditions in which the machine is likely to operate. The solution proposed by the authors is IEC/TS 62998-1 [15], which allows the evaluation of the systematic suitability of safety-related sensors for a target application where the product-specific sensor standards do not specify all requirements or where no application standard exists. The standard therefore gives requirements for the design, integration, and validation of safety-related sensors (SRS) and safety-related sensor systems (SRSS) used for the protection of persons, with special attention to systematic capabilities. This is made possible by the standard’s definition of SRS/SRSS performance classes. These can be mapped to the safety levels of the various industry-specific standards. The need to achieve an SRS/SRSS performance class arises if a sensor is integrated into a safety-related control system that performs a safety-related function, such as monitoring the surroundings of a machine for the absence of safety-related objects. The achieved performance class reflects the safety level of the safety function performed by the sensor or system. Following IEC/TS 62998-1, the prerequisite is that the environmental influences and tests for indoor and outdoor use that influence the sensor’s function and the reliability of its detection capability are defined specifically for the autonomous work machine.
Figure 4 shows the architecture of an SRS defined according to IEC/TS 62998-1 [15].
Referring back to the example application, where the environment perception system should be able to detect a person in a dusty environment, the sensors used in the perception system are safety-related. Therefore, their detection capability must be proven to be unaffected by environmental interferences, such as dust. As previously stated in the introductory section, an impaired detection capability can manifest itself in two distinct ways: on the one hand, as reduced reliability (e.g., a camera system is not capable of perceiving the shape of the person due to the dust); on the other hand, as low availability (e.g., a laser scanner perceives too many detections due to the dust particles). Finally, Figure 4 illustrates that the evaluation of detection capability is a signal-to-noise ratio evaluation. This is because a statement is to be made regarding the suitability of the chosen sensor system for the purpose of accurately perceiving the worst-case properties of the safety-related object despite the environmental influences.

2.2. Approaches to Evaluate the Detection Capability

The ability of an autonomous machine to make the right decisions in its environment cannot be derived from a single test situation. It is, therefore, necessary to evaluate existing methods and test approaches and to find a way to link existing test stands and the associated information they can provide. Hoss et al. highlighted the challenges in the realization of safety-oriented perception tests and proposed harmonizing the contributions from the primary literature further and developing new test methods to demonstrate the reliability of perception subsystems for vehicles with autonomy level 4 [18]. This means a validation framework is needed that provides a logical structure for the different approaches. As described in the previous chapter, IEC/TS 62998-1 provides the ability to validate a safety-related sensor setup in combination with the agricultural machine. Previous approaches to evaluating the detection capability are reviewed below. It is noteworthy that the methodologies reviewed in this study address disparate system levels. This is in accordance with the two approaches identified by Stellet et al.: firstly, the reference perception must be evaluated; secondly, the perception subsystem should be tested [19].

2.2.1. Machine-Related Evaluation

For example, the Agricultural Robotic Performance Assessment Tests (ARPA) developed by AgroTechnoPôle propose test examples that aim to address the standardized, on-machine validation of several safety functions and other task performance [20]. At a different level, but with the same validation objective, there are approaches that first determine the limits of the sensor system in the outdoor area, independently of the automated machine. For example, the approach of Zhang et al. provides a holistic overview of state-of-the-art algorithms, deep learning methods, and sensor fusion solutions to advance research into the environmental perception of (highly) automated driving in adverse weather conditions [21].

2.2.2. Sensor-Related Evaluation

A more sensor system-related approach is a proposed iteration process for designing a functionally safe and reliable perception system by developing parameters such as pixel density and a proposed criterion called ’perception density’ for a static test setup. This should help to standardize the technical requirements and thus enable comparison between perception systems [10]. A further step towards the isolated consideration of sensor performance is described with the MOT metric (Minimum Object-detectable Transmittance). The metric is proposed for evaluating the object detection performance of non-contact safety-related sensors in low-visibility environments, thus providing valuable insights for improving the safety of outdoor machinery [22]. This leads to the new REDA (Real Environment Detection Area) method, which aims to provide a generic evaluation of the strengths and weaknesses of sensor systems used in harsh environments and offers a validation solution on several system levels [14].

2.3. Methodology of Real Environment Detection Area (REDA)

The REDA method was developed for the evaluation of data from an outdoor test stand that was located in the real environment of agriculture [9]. The original objective of the evaluation conducted on the test stand using the aforementioned method was to validate an environment detection system for a specific autonomous agricultural machine. Nevertheless, it was demonstrated that the method could be utilized as a general framework for evaluating the detection capabilities of sensors in the presence of environmental influences [14]. Therefore, the method provides a basis for the introduction of a validation scoring metric for environmental perception systems of autonomous machines. Although the evaluation principle and the test stand measuring principle have already been published and explained in earlier literature, some principles are repeated below to enhance the clarity [9,14,16,23].

2.3.1. REDA Terminology

To clarify the REDA method, it is necessary to explain two of its principal abbreviations and to map them to the expressions previously mentioned. These are the Object Detection System (ODS) and the Specified Detection Area (SDA).
  • Object Detection Systems (ODS)
In terms of the REDA methodology, the sensor system under test is referred to in general as an Object Detection System (ODS). An ODS fits the definition of the safety-related sensor (SRS) in Figure 4 in Section 2.1. There, it was stated that an SRS is composed of the sensing unit(s), the processing unit, and the input/output unit. This reiterates the claim made by the REDA method that the detection capability of the sensor system is evaluated as a whole, without consideration of the specific sensing units employed for detection and without consideration of the algorithms or neural networks utilized in the processing unit. This implies that the ODS/SRS is regarded as a black box whose detection capability is to be quantified and assessed. This still offers the possibility to give a performance statement for the sensing unit or the processing unit individually. For example, ODS_A is a combination of SensingUnit_A and ProcessingUnit_A, while ODS_B is a combination of SensingUnit_A and ProcessingUnit_B. If ODS_A and ODS_B are subjected to parallel testing of their detection capabilities, the respective algorithms’ performance can be compared.
  • Specified Detection Area (SDA)
The Specified Detection Area (SDA) is the sensor-related area in which the detection capability is under test and the area that will be monitored if the sensor is used on the machine. This area enables the comparison of different sensor technologies and expresses suitability for the machine due to its characteristic of being sensor-specific but compliant with the requirements of ISO 18497. This is accomplished by quantifying the sensor’s detection capability in the SDA, which is of interest to the manufacturer. This enables them to identify which areas the sensor is capable of reliably monitoring. To explain how the SDA links the detection inside its boundaries with the demand of ISO 18497, the principle of the top view shown in Figure 3 is referred to and exemplified in Figure 5.
As illustrated in the figure, regardless of the three-dimensional characteristics of the hazard and warning zones, the top view allows for these zones’ comparison. The warning zone and hazard zone likely cover a 360° area around the autonomous machine, since in the case of a mobile robot in an open field, the safety-related object can approach from all possible sides. The machine designer is responsible for selecting suitable sensors to complete the 360° environmental perception of the machine. This task can be simplified by setting the detection capability in relation to the SDA and providing a statement about the reliable coverage of the hazard or warning zone. The principle is illustrated in Figure 5, which is a section of the schematic top view from Figure 3, because this perspective allows for the comparison of different, potentially three-dimensional ROIs of the ODS. From this same aerial perspective, the Real Environment Detection Area can be described as an area where a sensor system can detect safety-related objects under real environment conditions. To illustrate the relevant claim that has to be evaluated for the SDA, the probable use of the machine in the real environment is referred to. Using the schematic top view from Figure 3 in Section 2.1, the machine and its hazard and warning zones are permanently moved through the working area. If a safety-related object moves into the warning zone, or if the warning zone is moved over the object, there will always be a first entry at the edge of the warning zone. This first entry has to be evaluated for the SDA. For instance, in Figure 5, regardless of whether the machine or the object is in motion, the evaluation to be made is how long after the first entry into the SDA the ODS was capable of first perceiving the object.

2.3.2. Concept of REDA and REDAM

As previously explained, the REDA method allows the evaluation of the detection capability inside the SDA. In this chapter, the evaluation concept will be explained. The measurements permit evaluations at two system levels. The REDA (Real Environment Detection Area) evaluates the detection capability of a sensor and the REDAM (Real Environment Detection Area Matrix) is relevant for sensor fusion. Both are introduced and explained below. Further description and explanation of the test setup and the method have been published in previous work [9,14].
  • Real Environment Detection Area (REDA)
When the SDA was explained in Section 2.3.1, it was stated that the entries of safety-related objects into the SDA have to be evaluated. This is what has been performed on the Agro-Safety test stand, where a movable sensor carrier and a movable test object are mounted on a two-axis portal system. This test setup allowed the test object to be moved permanently in and out of the SDA [9]. Every 3 ms, properties such as location, speed, and acceleration of both are tracked and saved to a database. Furthermore, the sensor vector incorporates the recorded detection signal. The design of the test scenario movements led to the result that up to 25,000 vectors can be recorded during a measurement. If these vectors are sorted by relative position and plotted in a matrix, the pattern shown in Figure 6a emerges. The data points appear as lines due to the high recording rate of the test stand. For all of these points, the vector data such as velocity or acceleration are still applicable; the different moving directions are symbolically represented by the arrows in Figure 6a. As the test stand was fenced in, it was assured that during each measurement the only object that could be detected was the test object. This led to the visualization of the Real Environment Detection Area (REDA), which is a set of adjacent points signaled by the sensor as detections, represented as the yellow area in the figure. Due to the assurance that only the test object was moved through the SDA during the measurement, the group of detections has to be the sensor’s Real Environment Detection Area. If a measurement shows anomalies, the only reason could be environmental influence, as the test stand was located in an outdoor area next to a farm. Therefore, it was possible to capture the real occurring environmental influences of the agricultural sector and map these influences to the anomalies.
In Figure 6b, the resulting REDA is compared to the SDA; both visualizations offer different conclusions and evaluations. As shown in Figure 6b, the contrast between the resulting REDA and the set SDA of an ODS can be highlighted. A REDA, therefore, describes an area where it is validated that the sensor can detect the safety-related object despite the environmental conditions that occurred during a measurement. Those environmental influences need to be measured in an appropriate manner, as Meltebrink et al. did for the outdoor test stand where the REDA method was first applied [9].
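The construction of such a REDA from recorded vectors can be sketched as a simple rasterization; the grid resolution and extent here are illustrative assumptions rather than parameters of the actual test stand:

```python
import numpy as np

def build_reda_grid(rel_positions, detected, cell=0.1, extent=5.0):
    """Rasterize recorded (relative position, detection signal) pairs
    into a boolean grid; True cells form the REDA point cloud."""
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    for (x, y), d in zip(rel_positions, detected):
        if d:  # only points signaled as detections belong to the REDA
            i, j = int((x + extent) / cell), int((y + extent) / cell)
            if 0 <= i < n and 0 <= j < n:
                grid[i, j] = True
    return grid

# Two detections and one non-detection at different relative positions
grid = build_reda_grid([(0.0, 0.0), (1.0, -1.0), (2.0, 2.0)],
                       [True, True, False])
print(int(grid.sum()))  # -> 2
```

Comparing such a grid against a rasterized SDA directly reproduces the Figure 6b-style contrast between the validated detection area and the specified one.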
  • Real Environment Detection Area Matrix (REDAM)
However, if a sensor technology experiences issues with environmental noise that may arise in the machine’s surroundings, the autonomous machine developer must also consider sensor fusion. In this case, there are two options: complementary and redundant sensor fusion. Complementary fusion combines different pieces of information from different sensors (for example, a camera with a RADAR) to determine the distance between the obstacle and the machine and to classify the object using the camera image. Redundant sensor fusion uses the same information from different sensors (e.g., LiDAR and RADAR distance information) [24]. Regardless of the sensor variant chosen (single, redundant, or complementary sensor fusion), the REDA method offers a solution for an easier selection of the best sensor fusion based on the evaluated detection capability, which is called the REDAM (Real Environment Detection Area Matrix) [9,14]. The principle is shown in Figure 7; in Figure 7a, there are five REDAs for five different environment conditions layered above each other. The top view in Figure 7b provides a quick overview to identify the environmental condition limiting the monitoring of the set ROI. For this example, an ODS must be found that can monitor the safety-related object despite the identified environmental influences.
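The layering idea behind the REDAM can be sketched with one boolean grid per environmental condition; the condition names and grids below are illustrative assumptions:

```python
import numpy as np

def redam_coverage(redas: dict, roi: np.ndarray) -> dict:
    """Fraction of the ROI covered by each condition's REDA; the
    minimum identifies the condition limiting reliable monitoring."""
    roi_cells = roi.sum()
    return {cond: float((reda & roi).sum()) / roi_cells
            for cond, reda in redas.items()}

roi = np.ones((4, 4), dtype=bool)          # region the machine must monitor
fog = np.zeros((4, 4), dtype=bool)
fog[:2] = True                             # fog halves the usable area
redas = {"clear": np.ones((4, 4), dtype=bool), "fog": fog}

coverage = redam_coverage(redas, roi)
print(min(coverage, key=coverage.get))  # -> fog
```

The condition with the lowest coverage plays the role of the limiting layer visible in the Figure 7b top view.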

2.3.3. REDA-Scores

The REDA method presented above is therefore the first way to make the detection capability of an ODS measurable and visible; additionally, any ODSs can be compared with each other, independently of the sensor technology. Following the weather classes proposed by the REDA methodology, it is possible to quantify the detection capability as a function of specific environmental influences [14]. To evaluate the detection capability in figures, Meltebrink et al. proposed the REDA-Scores, which make values such as area, perimeter, or compactness of a REDA quantifiable, and thus comparable, for a specific environment condition [14]. The origin of the scores lies in the evaluation of the multiple measurements that accumulate when an ODS’s detection capability is tested more than a thousand times over the year. This large number of measurements makes it impossible to tell at a glance which measurement corresponds to the average detection capability of the ODS and which measurement represents poor performance. The objective was to facilitate the identification of outliers in this large dataset by establishing a scoring system.
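A possible computation of such shape-based scores from a REDA given as a boolean grid could look as follows; the isoperimetric definition of compactness is one common choice and an assumption on our part, not necessarily the definition used in [14]:

```python
import numpy as np

def reda_scores(mask: np.ndarray, cell: float = 1.0) -> dict:
    """Area, perimeter, and compactness of a REDA boolean grid
    (cell = edge length of one grid cell)."""
    area = float(mask.sum()) * cell ** 2
    padded = np.pad(mask, 1)
    # perimeter: count exposed cell edges (neighbour lies outside the mask)
    edges = 0
    edges += (mask & ~padded[:-2, 1:-1]).sum()   # neighbour above
    edges += (mask & ~padded[2:, 1:-1]).sum()    # neighbour below
    edges += (mask & ~padded[1:-1, :-2]).sum()   # neighbour left
    edges += (mask & ~padded[1:-1, 2:]).sum()    # neighbour right
    perimeter = float(edges) * cell
    # isoperimetric compactness: 1.0 for a circle, smaller otherwise
    compactness = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "compactness": compactness}

s = reda_scores(np.ones((2, 2), dtype=bool))
print(s["area"], s["perimeter"])  # -> 4.0 8.0
```

Scores of this kind condense each of the thousands of yearly measurements into a few numbers, so that average performance and outliers become visible at a glance.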
  • Detection Scores
Furthermore, Meltebrink et al. offered an evaluation of each classification in a REDA measurement, called the detection scores, which provide an evaluation of each measurement point in a REDA measurement [14]. The classification used for the detection scores is based on the confusion matrix, which comprises two dimensions: the actual outcome and the prediction. The outcome was the measured REDA point cloud, as each point in the cloud represents the actual state of the sensor, while the SDA represented the prediction of where the test object was expected to be detected. This led to four possible classifications of a point in the REDA point cloud, depending on its position inside or outside the SDA:
  • Expected detection;
  • Unexpected detection;
  • Unexpected non-detection;
  • Expected non-detection.
With this approach of the detection scores, the relation to reliability and availability can first be pointed out: unexpected detections reduce the availability, while unexpected non-detections reduce the reliability of the sensor. The application of the confusion matrix for this type of classification problem has previously been proven useful in evaluating the performance of autonomous machines [25], and furthermore, has been used for a proposal of a safety metric regarding safe environment perception in autonomous driving [26]. In particular, the latter proposed approach is highly promising and can play a pivotal role in the continued evaluation of the detection scores. The proposal defines the ground truth for the intended perception task. In the current state of the detection scores, the ground truth is the SDA. However, the SDA does not consider any of the sensor configurations or test stand parameters, and therefore, does not represent the real environment ground truth.
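The point-by-point classification described above can be sketched in a few lines of Python. The rectangular SDA geometry and the field names below are illustrative assumptions for this sketch, not the test stand's actual data model:

```python
from dataclasses import dataclass

@dataclass
class RedaPoint:
    x: float          # lateral target position relative to the sensor [m]
    y: float          # longitudinal target position relative to the sensor [m]
    detected: bool    # True if the sensor signalled a detection at this timestamp

def inside_rect_sda(p: RedaPoint, width_m: float, depth_m: float) -> bool:
    """Hypothetical rectangular SDA centred on the sensor axis."""
    return abs(p.x) <= width_m / 2 and 0.0 <= p.y <= depth_m

def classify(p: RedaPoint, width_m: float, depth_m: float) -> str:
    """Assign one of the four detection-score classes to a REDA point."""
    inside = inside_rect_sda(p, width_m, depth_m)
    if p.detected:
        return "expected detection" if inside else "unexpected detection"
    return "unexpected non-detection" if inside else "expected non-detection"
```

A point recorded inside the SDA without a detection signal would thus be classified as an unexpected non-detection, the safety-critical case.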

3. Detection Capability in Agriculture Environment

For an environment perception system that is used for an autonomous machine in outdoor areas, the detection capability is an important quality feature to express the system’s reliability. The need for an assessment of the detection capability is a relatively recent one, as it has only become a safety concern with the advent of autonomous systems. For this purpose, an overview of detection capability is given, starting with the challenges that arise for environment perception when it has to be performed on mobile outdoor machines. By introducing the terms detection goal, environment noise, and detection information, a structured description of the aspects to be considered for detection capability is given. Subsequently, the necessity of a parameter-based decomposition of the environment is emphasized in order to allow the evaluation of the detection capability.

3.1. Challenges for the Detection Capability in Agricultural Environments

The environmental constellation of a mobile machine used outdoors is a major challenge for the environment perception system that monitors its surroundings. This is because three problems are combined, each of which poses a challenge to the detection capability of a sensor applied to such a machine. The extent to which each of these issues challenges a sensor’s detection capability is briefly explained below.

3.1.1. High Variance of Possible Objects

Ensured detection of safety-related objects in the surroundings of a highly automated machine is the key task for reliable environment perception. However, the potential objects must be defined for the perception system to be able to investigate their presence in the zone to be monitored. As the number of possible objects increases, so does the challenge of ensuring that they are all perceived. In agricultural applications, the range of possible objects is wider than in many industrial applications, as many of the machines are used in semi-public or public areas. This increases the risk of children or other unqualified persons approaching the machine and being exposed to potential hazards because there is no way to control access [8,23].

3.1.2. Constantly Changing Environment and Concealments

Many applications of sensors for outdoor environment perception have been limited to static applications on stationary machines, where the environmental characteristics of the monitored area do not change and objects moving into the area can be reliably detected because they represent an anomaly within the previously free ROI. For autonomous agricultural machines, it cannot be excluded that the machine itself is mobile, which means that the sensor and its ROI move through the environment to be perceived, and the contents within the ROI are therefore constantly changing. This is a major challenge and requires some interpretation of the data to avoid forcing the machine to stop every time an object is detected, even if it is just a tree that could easily be driven around. In addition, the mobility of the machine means that objects from the environment may be between the sensor and the safety-related object, partially or completely obscuring it. Since these obstructing objects in the agricultural sector can be organic, such as crops, the variance of the physical properties is very high and changes depending on the time of observation. The third challenging aspect of mobility is that the perception system is exposed to the same irregularities as the machine, for example, when the terrain is uneven. This results in completely different viewing angles for the perception system [27,28,29].

3.1.3. Environment Related Influences

The two above-mentioned challenges are combined with the problems of the harsh outdoor environment conditions, as the entire scene in the sensor ROI can be disturbed by environmental influences. These influences can be weather-related, such as fog or rain, or process-related, such as dust, mud, etc. [9,10].

3.2. Terminology Used to Describe the Detection Capability

In the following, a terminology is proposed that expresses the sensor detection capability characteristics that are relevant to meet the machine-related challenges. This terminology serves as the necessary expression link between machine and sensor manufacturers by mapping machine risk assessment requirements to detection capability requirements.

3.2.1. Detection Goal

The detection goal can be most easily described as the safety-related object that the machine has to avoid. The primary entities that are considered are people or animals, given the reference to safety. Subsequently, static obstacles such as trees and negative obstacles such as ditches are also included. The detection goal is therefore a single object or a comprehensive set of disparate objects, including living beings and even environmental configurations.

3.2.2. Detection Information

The detection information delineates the protocol of information exchange between the machine and its environment perception system in the event of a perceived object detection. The following expressions are intended for machine designers and should be considered in conjunction with the risk assessment. It is important to consider both the type of perception and the degree of information depth. This information can be given in differing levels of detail and be separated into two dimensions. For a better understanding, Figure 8 shows three examples to illustrate that the detection information can vary for the same perception task of the detection goal. The graph is based on an example in which a person is the detection goal.
  • Type of Perception
The type of perception describes the level of detail in which the detection goal must be perceived in order to initiate detection information to the machine. The assessment of whether an object is within the ROI can serve as an initial typification, which could ultimately result in the classification of the object. Using the example of a person, if this person were to be detected by a 2D laser scanner, the scanner would perceive a detection that looks like a line. A radar scanner may also perceive the speed of the detected object, and a camera, for example, can evaluate the classification of the object. The highest degree of perceptual resolution would be the classification of the object and its moving direction.
  • Degree of Information Depth
The depth of the information that is passed to the machine is the machine-related criterion for the detection information. The information has to be defined by the machine manufacturer and describes how detailed the information is in the case of a perception of a detection goal. Using the person example, the lowest level of information passed to the machine can be that there is or is not an object in the sensor’s Region Of Interest (ROI). The next step would be information about the size of the object, and finally, the direction of movement of the object.
It should be noted that the resolutions are not interdependent, as a perception system can detect and classify an object as a person and still only give the machine the information of a field evaluation of its ROI. With these terms, the examples in Figure 8 can be explained. The requirements for X_1 state that a sensor should transmit the corresponding binary signal for a person classified in the sensor’s ROI. In the case of a requirement like X_2, the type of field evaluation is almost insignificant. However, in this case, the position in the ROI and the proportions should be transferred as safety-related information. For the example X_3, the task is to classify a moving person and transfer all relevant information to the machine, including proportions, position in the field, and direction of movement.
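As an illustration of the degree of information depth, a hypothetical message type for the detection information could look as follows. The field names and units are assumptions made for this sketch, not a standardized interface:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DetectionInformation:
    """Hypothetical message from the perception system to the machine.

    Which optional fields are populated depends on the information depth
    agreed in the risk assessment (cf. the examples X_1 to X_3 in Figure 8).
    """
    object_in_roi: bool                               # lowest depth (X_1)
    proportions: Optional[Tuple[float, float]] = None  # width, height [m] (X_2)
    position: Optional[Tuple[float, float]] = None     # position in ROI [m] (X_2)
    movement_direction: Optional[float] = None         # heading [deg] (X_3)

# X_1: binary field evaluation only
x1 = DetectionInformation(object_in_roi=True)
# X_3: full information depth
x3 = DetectionInformation(object_in_roi=True, proportions=(0.5, 1.8),
                          position=(1.2, 3.4), movement_direction=90.0)
```

The sketch also reflects that the type of perception and the information depth are independent: a system may classify a person internally yet only transmit `object_in_roi`.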

3.2.3. Environment Noise

In addition to the novelty of the required proof of detection capability, the physical description of interferences that potentially reduce the detection capability is also a new problem. It is, therefore, challenging to prove the reliability of a detection by a sensor system under unclassified interferences. However, following the example of the IP codes, which define the design of enclosures against environmental influences like particles, there is a need for a standardized classification of the environmental influences that can affect the performance of a perception system [19,30]. In principle, environmental influences can be described in a systematic manner. However, the specific outcome of a given situation cannot be predetermined and, as a result, must be described as random. For a clearer understanding, the authors propose to refer to the potential environmental interference that can affect the sensor’s capability to perceive the detection goal as ‘Environment Noise’. As with noise in the transmission of an information signal, environment noise can falsify the received information. Meltebrink et al. described this lack of definition and, in order to compare the sensor’s detection capability in relation to the environment, used classes based on established principles such as the Beaufort Scale and the weather classes from the Marines Handbook [14,31]. This at least means that quantifiable statements about environment noise can be made; for example, the environment noise regarding precipitation for the measurement was rain class 4. The proposed environment classes will be used in the following section to quantify environment noise.
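Assuming each measurement is tagged with such discrete environment classes, a minimal sketch might look like this. Only the idea of class numbers (e.g., rain class 4) comes from the text; the attribute names here are illustrative, not the standardized set:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentNoise:
    """Quantified environment noise tag for one measurement.

    Class numbers follow the idea of the REDA weather classes; 0 means
    the influence was absent during the measurement.
    """
    rain_class: int = 0
    fog_class: int = 0
    wind_class: int = 0   # e.g., a Beaufort number

# Example from the text: precipitation during the measurement was rain class 4.
measurement_noise = EnvironmentNoise(rain_class=4)
```

Making the tag immutable (`frozen=True`) reflects that the recorded environment noise of a finished measurement should never be altered afterwards.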

4. Parameters of Interest for Real Environment Detection Capability

The challenging task regarding the safety validation is the proof that the safety-related task can be performed under all conceivable conditions, leading to the consideration of the worst-case scenario that could happen. As shown in Section 3.1, the definitive constellation of the worst-case scenario is not predictable in outdoor areas. Therefore, the approach taken here is to decompose scenarios that have been considered safety-critical during the risk assessment into their parameters. This allows for a structured description of the scenario and offers the testing of the detection capability for these specific worst-case parameters.

4.1. Proposal for a Parameter-Based Description of the Environment

In Figure 9, a safety-critical situation is illustrated and used as an example to explain the principle behind the proposed parameter-based description of the environment.
To illustrate the suitability of a formalism, the example is expressed for a camera and a laser scanner. The center of the figure shows the exemplary real-world situation from the perspective of the automated machine facing a dust/fog environment, where a person is inside a crop field and deer are crossing the machine’s way. The blue dashed frame can be described as the field of view of the camera. On the ground, the hazard and warning zones are shown, which represent the set ROI edges of the laser scanner. What now has to be evaluated is the sensor’s detection capability to perceive the safety-critical object. To evaluate this, the relevant parameters have to be elaborated out of the scene. These are the worst-case parameters of the detection goal for the perception system, which would be, in the case of the camera, the shape, and in the case of the laser scanner, the remission and the shape. These parameters, i.e., G_Shape and G_Remission, can be collected in the matrix G_P. This matrix, therefore, represents the actual properties of the real-world detection goal. Between the sensor and the safety-critical object is the environment noise, which is in this case vegetation and dust. The environment noise parameters are saved in the matrix E_P as E_Dust and E_Vegetation. Therefore, the matrices are lists that contain the safety-related parameters that have to be evaluated for the detection capability. The third aspect that is now relevant is the sensor configuration, as a different setup probably has effects on the results. Therefore, the matrix S_P for the laser scanner contains parameters such as the technology used, the latencies, and the dimension of its ROI: S_LiDAR, S_Latency, S_ROI. The exemplary real-world situation can now be described with the matrices (G_P, E_P), where G_P ⊆ G and E_P ⊆ E. G and E represent all variances of parameters that are possible in the real world.
Thus, a sensor frame of a specific situation at a certain time can be summarized in the following parameter-based description: (S_P, G_P, E_P). The goal is to validate the detection capability for these worst-case parameters of such frames.
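A minimal sketch of this parameter-based description of the scene in Figure 9, with purely illustrative parameter names and values:

```python
# Hypothetical parameter 'matrices', stored as dictionaries keyed by
# parameter name; the concrete entries are invented for this example.
G_P = {"shape": "person, walking", "remission": 0.3}        # detection goal
E_P = {"dust": "class 3", "vegetation": "crop, 0.8 m"}      # environment noise
S_P = {"technology": "LiDAR", "latency_s": 0.08,
       "roi": {"width_m": 4.0, "depth_m": 10.0}}            # sensor configuration

# A sensor frame of the test situation is then the tuple (S_P, G_P, E_P):
frame = (S_P, G_P, E_P)
```

Each dictionary plays the role of one matrix: a named list of the safety-related parameters that have to be evaluated for the detection capability.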

4.2. Parameter-Based Test Designs

The SRS architecture permits the description of the safety-related system as a black box. The output in relation to the perception task is the variable of interest, and thus, a test stand that evaluates the detection capability must emulate the machine’s control unit. As previously indicated in Section 4.1, the scenario under test must be described in terms of its parameters. To be more precise, the test parameters can be derived from the matrices (G, E, S). This is because a real situation is an overlap of parameter intersections that can be defined by (G_P ⊆ G, E_P ⊆ E, S_P ⊆ S). Due to the complex environment, the goal of the test design must be to keep as many parameters constant as possible, which allows a more precise evaluation of the influence of a single parameter—or at least a few parameters—on the result of the detection capability. In general, there are three possibilities to test the perception capability of an ODS, which are presented below.
  • Isolated Real-World Situation
The Agro-Safety test stand, on which the REDA method was developed, serves as an exemplary model for isolated real-world tests [9,14]. The pioneering work demonstrating detection capability has already inspired other test stands and practical approaches [10,32].
  • Artificial Environment Noise
The second option is to create an artificial environment. To do this, data from real-world test stands must be used to create a measure of the influence of different types of environment noise. The MOT (Minimum Object-Detectable Transmittance) [22] value is based on test data from this type of test.
  • Full Parameter Simulation
The next step is then to keep all parameters artificial and create a virtualized test environment in which the environment noise, and the behavior of the ODS under it, can be simulated.
 
The matrices of the test situation can then be formulated in an analogous way to Figure 9. However, it must be clear whether parameters are tracked or evaluated. For example, a fixed test parameter of the detection goal (e.g., velocity) can be documented as G_velocity, where G_velocity ∈ G_P. At this point, the conformity of the REDA method with IEC/TS 62998-1 becomes apparent. Figure 10 illustrates how the REDA method meets the definitions of IEC/TS 62998-1. The core of the illustration is the proposed SRS-Architecture introduced in Figure 4. The key information that has to be validated is the safety-related information that is given to the machine. For each measurement conforming to the REDA method, the test computer used for data storage can be viewed as an emulation of the autonomous machine control unit. In addition to the safety-related information, the database where the data are stored holds several test parameters. In Figure 10, the test computer is represented by the orange box. The several test parameters are represented by the three arrows (gray, blue, and green) that are also given as parameter data to the ‘emulator’. The detection goal parameters G_P are represented by a gray arrow. The configuration of the ODS has to be respected in the evaluation as well, meaning that these parameters have to be tracked for the measurement. The SRS configurations are saved in the database in the matrix S_P. The third parameter set is that of the environment, which is saved as E_P. The totality of the recorded data of a measurement, in combination with the parameters of the measurement, results in a data set that holds a reproducible data collection of the test situation and allows for data evaluation in the post-processing.
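Such a reproducible data-set entry, combining the safety-related information with the tracked parameter matrices, could be sketched as follows; all field names are hypothetical, not the schema of the actual test computer:

```python
import json

def make_record(timestamp, safety_information, S_P, G_P, E_P):
    """One data-set entry as stored by the emulated machine control unit.

    The record bundles the sensor's safety-related output with the tracked
    parameter matrices so the test situation stays reproducible.
    """
    return {
        "timestamp": timestamp,
        "safety_information": safety_information,   # e.g., field violated?
        "sensor_parameters": S_P,
        "goal_parameters": G_P,
        "environment_parameters": E_P,
    }

record = make_record(1718800000.0, {"field_violated": True},
                     {"technology": "LiDAR", "latency_s": 0.08},
                     {"velocity_mps": 1.0},
                     {"rain_class": 4})
serialized = json.dumps(record)   # ready for storage in the database
```

Storing the parameter matrices next to every recorded signal is what allows the post-processing to filter measurements by, e.g., a specific environment noise class.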

4.3. Safety Parameters for the Detection Capability

As shown in the introduction (Section 1), when the detection capability is safety-related, the availability and reliability that the application of this sensor will bring to the autonomous machine should be expressed. Therefore, the scores should be designed to directly express the safety-related characteristics of the machine that the specific sensor’s use would imply. In addition, an overall suitability should be expressed by these scores, which leads to the third score.
  • Usability
The Usability-Score should provide an overview of the general use of the sensor for the autonomous machine.
  • Reliability
In the case of machine design, it is important to know how good the reliability of the machine would be if the environment perception system were applied to the machine. This leads to the Reliability-Score, which gives an overview of how reliably the sensor will detect safety-related objects despite the environment noise in which the autonomous machine is planned to operate.
  • Availability
The second safety-related value that a machine designer is probably interested in is the availability of the machine when the sensor system is applied to it. Therefore, the third score is the Availability-Score, which gives an overview of the availability of the autonomous machine when the ODS is used in the application. Availability is not a direct safety-critical problem but arises when the machine has too much downtime due to constant false detections of safety-related objects that are not there. The potential danger in this situation is that the owner of the autonomous machine will start bypassing the safety-related perception system to reduce the machine’s downtime.

4.4. Error Evaluation

As shown in Section 2.3.3, the application of the confusion matrix demands the actual outcome and the prediction value. This demands that the ground truth is known, and this ground truth is related to the errors that could occur during a measurement. Those errors can occur due to systematic or random influences. To explain how these errors could have affected the REDA measurements presented in Section 2.3, the development of a REDA point cloud is illustrated abstractly in Figure 11. The first vector shows the detection goal properties, such as relative position, relative velocity, and movement direction. By connecting this vector with the detection information vector, the relative positions can be connected with the detection information the sensor signaled at this time. The resulting matrix now contains the REDA evaluation data. As shown in Section 2.3.1, it is relevant to evaluate the first detection after the first entry; therefore, the evaluation matrix is filtered by the direction of the velocity vector, which leads to the illustrated top view, where only data points with the same direction are plotted.

4.4.1. Systematic Error Evaluation

The systematic errors that arise from a test setup can correlate with the test object speeds and the latency times of an ODS. The latencies have to be considered in any safety application, as it makes a difference if a sensor reliably classifies every object but needs 1 s for this classification. If this sensor is applied to a mobile machine that is moving at 25 km/h, the traveled distance between detection and signal would be 6.94 m. Taking this into account, it is possible to calculate the initial detection during a test movement and incorporate it as the eSDA in the data evaluation. The following section describes the origin of the eSDA and how it considers the systematic errors of a REDA test that are related to the SDA of the sensor and its boundaries.
  • expected Specified Detection Area (eSDA)
The implementation of the eSDA allows for the accurate evaluation of a sensor signal recorded at a specific target location at a specific timestamp. This is because the boundaries of the eSDA are determined by the physical relationships of the test situation. Figure 12 illustrates how an eSDA can be described. Assuming a perpendicular entry into the SDA by a target, the first detection is transferred to the machine with a certain latency. In a test situation where the aim is to simulate a real situation, this aspect can be used to verify or falsify the result of a sensor based on all test parameters. Figure 12 illustrates an example of the systematic edge displacement that is the basic principle for the development of the eSDA. In the given example, an object is moving from the front into the SDA. The relative velocity can be caused by the velocity of the ODS (v_ODS), by the velocity of the target (v_Target), or by the sum of both. Due to the physical relationship (1), shown below, the velocity is the relative velocity between the ODS and the test object:
s = v · (t_2 − t_1). (1)
Therefore, the relative velocity corresponds to the sum of the motion-filtered velocities of the sensor and the test specimen. The time t depends on the sensor setting; this time, t_ODS Setup, is equal to the difference t_2 − t_1 (see Figure 12). This results in the relationship shown in Equation (2):
s_systematic = v_relative · t_ODS Setup. (2)
The calculation of the edge displacement based on s_systematic allows the creation of an eSDA. Accordingly, the eSDA describes the range in which a sensor should be able to detect the safety-related object based on the test setup and its calibration.
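Equations (1) and (2) can be checked with a short calculation. The frontal-approach assumption (relative velocity as the sum of both speeds) follows the example in Figure 12, and the 25 km/h latency example from Section 4.4.1 is reproduced; the inward direction of the edge shift is an assumption of this sketch:

```python
def systematic_edge_shift(v_ods_mps: float, v_target_mps: float,
                          t_ods_setup_s: float) -> float:
    """Eq. (2): s_systematic = v_relative * t_ODS_Setup.

    For a frontal approach, the relative velocity is the sum of the
    motion-filtered speeds of the ODS and the target.
    """
    v_relative = v_ods_mps + v_target_mps
    return v_relative * t_ods_setup_s

def esda_front_edge(sda_front_edge_m: float, shift_m: float) -> float:
    """The expected first-detection edge, displaced inward by s_systematic."""
    return sda_front_edge_m - shift_m

# Latency example from the text: 25 km/h machine speed, 1 s sensor latency.
shift = systematic_edge_shift(25 / 3.6, 0.0, 1.0)   # ≈ 6.94 m
```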

4.4.2. Random Error Evaluation

Random errors occur due to unpredictable constellations on the test stand, as it cannot be assured that an artificial detection target always stands ideally still or has the ideal shape to determine exactly the first entry into the SDA.
  • Tolerance Zone
The error consideration is incomplete without accounting for random errors in a test setup. Thus, a tolerance must be applied to the initial edge shift of the eSDA calculation. This confidence interval should consider the peculiarities of the test stand setup, such as the asymmetrical proportions of a test object that result in different initial detection times depending on the scan plane (in the case of laser scanners). The zone designated as the tolerance zone is derived from the eSDA, but it affects the REDA point cloud by eliminating data points that fall within the specified tolerance zone. This is due to the test design: it is inadvisable to consider data points located at the periphery of the eSDA, given the possibility that a random error may have occurred at the time of data recording. The tolerance zone shown in Figure 13 surrounds the edges of the eSDA, as it cannot be said for certain whether some parts of the test object were or were not already inside the expected area.
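A simplified one-dimensional sketch of the tolerance zone filter, assuming the eSDA edge runs perpendicular to the direction of travel; the 0.2 m tolerance is an arbitrary example value, not a recommendation:

```python
def apply_tolerance_zone(points, esda_edge_y: float, tolerance_m: float):
    """Discard REDA points whose longitudinal distance to the eSDA edge
    lies within the tolerance zone (simplified 1-D sketch).

    points: iterable of (x, y) target positions in metres.
    """
    return [p for p in points if abs(p[1] - esda_edge_y) > tolerance_m]

cloud = [(0.0, 9.9), (0.0, 10.05), (0.0, 8.0)]
filtered = apply_tolerance_zone(cloud, esda_edge_y=10.0, tolerance_m=0.2)
# only the point well away from the edge survives the filter
```

Points near the edge are removed rather than reclassified, mirroring the text: at the periphery, it cannot be said whether parts of the test object were already inside the expected area.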

5. Validation Scores of the Real Environment Detection Capability

Once the random errors in the REDA point cloud have been accounted for and the overlaid eSDA reflects the systematic errors of a REDA measurement, the validation scores for the Real Environment Detection can be evaluated. Such quality scores must be designed in such a way that the numbers can be easily understood and formulated in a machine-designer-oriented manner. This resulted in the three new REDA-Scores: the Reliability-, Availability-, and Usability-Score. These scores will be applied and explained in the following.

5.1. Classification for the Parameter-Based Real Environment Detection Capability

The confusion matrix comprises two dimensions: the actual outcome and the prediction. The subsequent paragraphs will describe each of these dimensions in the context of the intended classification.
  • Outcome: the random-error-accounted REDA point cloud;
  • Prediction: the expected Specified Detection Area (eSDA).
The outcome is the result of the measurement, which in the case of a REDA measurement is the REDA point cloud. Once the tolerance zone has been applied to the point cloud, it can be confirmed that the measured signal from the ODS is a direct result of detection or non-detection, and not due to the randomly moving detection goal. This outcome forms the basis for the application of the prediction value above it. The eSDA creates the prediction, as it marks the area where the ODS should always detect the detection target. The principle is shown in Figure 14. Each data point is represented as 1 or 0, and with the introduction of the eSDA, these recorded sensor signals can be verified or falsified. This is realized by sorting every detection or non-detection into the four-field matrix that is shown next to the schematic point cloud. The goal is to analyze whether the relative position of the test object, which is linked by a timestamp as mentioned above, was inside or outside the eSDA. The four-field matrix is structured in such a way that the information about the sensor signal classifies a point at the top or bottom of the matrix, and the position of the test object determines whether the individual detection is classified to the left or right of the matrix, based on its location inside or outside of the eSDA. This results in four possible classifications for each of the detections, listed and explained below.
  • TP (True Positive): Expected Detection
The signal for detection was present at the sensor and the coordinate of the test object is within the eSDA. This expected detection is visualized with a green dot.
  • FP (False Positive): Unexpected Detection
The sensor has signaled a detection, but the relative coordinate of the test specimen is outside the eSDA. The unexpected detection is visualized with a magenta-colored dot.
  • FN (False Negative): Unexpected non-detection
The sensor does not perceive a field violation but the coordinate of the test object is within the eSDA. The unexpected non-detection is visualized with a red dot.
  • TN (True Negative): Expected non-detection
The signal for a non-detection was present at the sensor and the coordinate of the test object is outside the eSDA. The expected non-detection is visualized with a black dot.
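The four-field classification above can be sketched as a simple counting routine over (detected, inside-eSDA) pairs; the pair-based input format is an assumption of this sketch:

```python
def four_field_matrix(points):
    """Count TP/FP/FN/TN from (detected, inside_esda) boolean pairs.

    TP: expected detection        FP: unexpected detection
    FN: unexpected non-detection  TN: expected non-detection
    """
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for detected, inside in points:
        key = ("TP" if inside else "FP") if detected else ("FN" if inside else "TN")
        counts[key] += 1
    return counts

m = four_field_matrix([(True, True), (True, False), (False, True), (False, False)])
```

The resulting counts are exactly the four inputs needed for the score formulas in the next section.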

5.2. Real Environment Detection Score (RED-Score)

As formerly described, this suitability should be expressed in three ways, i.e., availability, reliability, and usability. These scores have to be mapped onto the evaluation options that are offered by the confusion matrix. The evaluation is shown in Figure 15. How the mapping is justified is explained for each score individually below. The values are not real measurement results; they have been evaluated from the previously given abstract and movement-isolated REDA point cloud.
  • Reliability-Score
The Reliability-Score indicates a measure of the reliability of the autonomous machine if the ODS is applied. The formula for calculating the value is based on the recall score and is shown in Equation (3). The recall score was used because it is important to evaluate how reliably the monitoring of the eSDA was performed. This means that any non-detection inside the eSDA decreases the Reliability-Score. Non-detections inside the eSDA can be safety-critical because they imply that the ODS failed to detect the safety-related object.
Reliability-Score = TP / (TP + FN) × 100 (3)
  • Availability-Score
The Availability-Score is based on the specificity. This means that for the evaluation, all data points are evaluated that are outside of the eSDA. Any unexpected detection outside of the eSDA leads to a decrease of the machine’s availability. This availability should be understood for the autonomous machine on which the ODS is intended to be applied. The Availability-Score is calculated as described in (4). Deviations of the score from 100% indicate more unexpected detections recorded by the sensor:
Availability-Score = TN / (TN + FP) × 100 (4)
  • Usability-Score
The Usability-Score (5) generally evaluates the complete classified amount of data points. It describes the sensor’s usability in this measurement by dividing the sum of expected detections and expected non-detections by the total number of Real Environment Detections. The score is based on the accuracy score because the usability should summarize the Reliability- and Availability-Scores. This can be achieved by the accuracy score, which takes both the number of true positives and the number of true negatives into account, which is relevant for the overall usability of the system. A higher Usability-Score indicates a higher suitability of the ODS for the autonomous machine:
Usability-Score = (TP + TN) / (TP + TN + FP + FN) × 100 (5)
It is important to read the scores in context. For instance, the scores can be interpreted as follows: a sensor configured like S_P has proven capable of reliably detecting the worst-case parameters G_P of the detection goal in an environment where the environment noise constellation was E_P in 84.44% of the detection tasks. The application of this sensor on an autonomous machine that is working in such an environment can lead to an availability of the machine of 90.96%. The overall usability of this sensor on this machine is evaluated to be 88.67%. These REDA-Scores can support the design of an environment perception system for the autonomous machine by arguing with the gained knowledge that the ODS’s detection capability is functional under the occurring environment noise of the autonomous machine.
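Equations (3)-(5) can be computed directly from the four-field counts. The counts used below are illustrative, not the measurement data behind Figure 15:

```python
def reda_scores(tp: int, fp: int, fn: int, tn: int):
    """Reliability (recall, Eq. (3)), Availability (specificity, Eq. (4)),
    and Usability (accuracy, Eq. (5)), each in percent."""
    reliability = tp / (tp + fn) * 100
    availability = tn / (tn + fp) * 100
    usability = (tp + tn) / (tp + tn + fp + fn) * 100
    return reliability, availability, usability

# Illustrative four-field counts (not the paper's measurement data):
rel, avail, usab = reda_scores(tp=76, fp=17, fn=14, tn=171)
```

Note that the Usability-Score always lies between the Reliability- and Availability-Scores, since accuracy is a weighted mean of recall and specificity.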

5.3. REDA-Database

The validation data resulting from the REDA measurements must ultimately be prepared in such a way that a machine developer can see at a glance which ODS is most suitable for their autonomous machine. Therefore, the authors suggest the creation of a REDA database. The REDA database contains the data and REDA scores of different test benches and sensors. The machine developer can then make an initial preselection of their ODS setup for their machine using the options for setting various filters, such as Fog level 6 suitability. The information that should be gathered in the REDA database is illustrated in Figure 16. The measurements are listed in the rows of the table, and the columns contain all the information relevant to the evaluation. The color scheme is the same as in Figure 10 and refers to the fact that the relevant parameters are saved in the columns. The calculated REDA-Scores are also stored in these columns. By using filters in the columns, the optimal score evaluation for the desired conditions can now be determined.
An example of using the REDA-Database is the selection of a sensor for the example environment perception system proposed in the introduction. The machine designer now has the opportunity to filter the REDA-Database for sensors that can perform the task of perceiving a human being in a dusty environment. In this example, two sensors are proposed, as shown in Table 1.
The machine designer is now in a position to select the most suitable sensor for the application case. Sensor A is the optimal choice in terms of overall performance and reliability. Sensor B, on the other hand, is weaker overall but offers higher availability. The decision therefore depends on the application case: for an autonomous machine, Sensor A, which exhibits the highest reliability, is recommended. In an assistance system where a human operator remains involved, however, Sensor B may be the more suitable option, given its higher Availability-Score.
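A minimal sketch of such a database query in plain Python could look as follows. The record fields are illustrative, not the actual REDA-Database schema; the scores for Sensors A and B are reused from Table 1, while Sensor C is invented so the filter has something to exclude:

```python
# Hypothetical flat record layout for REDA-Database entries.
records = [
    {"sensor": "Sensor A", "goal": "human", "noise": "dust",
     "usability": 94, "reliability": 99, "availability": 89},
    {"sensor": "Sensor B", "goal": "human", "noise": "dust",
     "usability": 91, "reliability": 87, "availability": 95},
    {"sensor": "Sensor C", "goal": "human", "noise": "fog level 6",
     "usability": 88, "reliability": 90, "availability": 85},
]

def preselect(goal: str, noise: str):
    """Filter the database for measurements matching the detection task."""
    return [r for r in records if r["goal"] == goal and r["noise"] == noise]

candidates = preselect("human", "dust")
# Autonomous machine: maximize reliability; assistive system: availability.
best_autonomous = max(candidates, key=lambda r: r["reliability"])
best_assistive = max(candidates, key=lambda r: r["availability"])
print(best_autonomous["sensor"], best_assistive["sensor"])
```

The two `max` calls reproduce the trade-off discussed above: the reliability filter selects Sensor A, the availability filter selects Sensor B.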

6. Conclusions

The introduction of (highly) automated machines into practice currently reaches its limits due to the lack of regulations for their safe environment perception. Providing a solution to evaluate perception systems for highly automated machines will therefore enable their commercial use. As shown in this paper, the validation of reliable environment perception fundamentally depends on proving the detection capability of a perception system. However, methods to prove or measure detection capability are rare, as are accepted assurance procedures. The novel REDA method provides a solution to evaluate the strengths and weaknesses of a sensor system in terms of detection capability. This method has been applied, and its REDA-Scores have been extended to address the lack of safety-related scores for detection capability. Additionally, the requirements of ISO 18497 are respected, with the consequence that the sensor-specific ROIs are evaluated. This complies with the standard, which requires safe and reliable monitoring of the so-called hazard and warning zones of the autonomous machine. The resulting scores aim to provide a quality characteristic of the detection capability inside the sensor's specific ROI and to describe this capability in a machine-oriented way. The first score is the Usability-Score, which gives an overview of the overall suitability of the sensor for the machine. The Availability-Score estimates the resulting availability of the machine when the sensor is used on the autonomous machine, and therefore in its typical environment. This score addresses the challenge that too many detections increase downtime and may lead to bypassing of the safety-related environment perception system.
The last score is the Reliability-Score, which gives an overview of how reliably the sensor will detect safety-critical objects despite the environmental influences in which the autonomous machine is planned to operate. A further benefit of these scores is that they comply with IEC/TS 62998-1, which offers a validation of environment perception systems used for the protection of people with regard to systematic capabilities. The accordance with the standard's SRS architecture is shown in Figure 4 (Section 2.1) and Figure 10 (Section 4.2). In addition, the point-by-point detection evaluation presented here for the first time enables a frame-specific evaluation of the detection capability of a sensor across all technologies. To support this evaluation, the expected Specified Detection Area (eSDA) and the random-error-accounting REDA point cloud are introduced, which allow a systematic and random error evaluation within the new REDA method. In conclusion, the REDA method is proposed to validate safety and reliability at the sensor level first. To underline this, the taxonomy of Stellet et al. is used to describe the extent to which the REDA method fulfills this taxonomy's needs [19]:
‘A statement on the system-under-test (test criteria) that is expressed quantitatively (metric) under a set of specified conditions (test scenario) with the use of knowledge of an ideal result (reference).’
The REDA method provides a generic approach to all four required properties.
  • Test criteria
The statement about the system (the sensor) under test is the detection capability, which can also be formulated as the sensor's capability to detect a safety-related object despite environment noise.
  • Metric
The metrics are the REDA-Scores, which evaluate the detection capability independently of the sensor technology. Currently, these comprise the Real Environment Detection Area (REDA) and the Real Environment Detection Area Matrix (REDAM). From these, several REDA-Scores have been derived that give information about the quality of the REDA or, in the case of the REDAM, can be used to compare sensors or to give recommendations for sensor fusion setups.
  • Test scenario
The scenario depends on the test bench. It has to be underlined that a scenario is derived from the autonomous machine's risk assessment. The identified hazards define the requirements for the detection task of the sensor system.
  • Reference
The references of the REDA method are the Specified Detection Area (SDA), the eSDA, and the Tolerance Zone. These areas are based on the individual ROI of a sensor, viewed from above, which makes the ROI describable as a two-dimensional area.

7. Outlook

In the autonomous machine design process, it is fundamental to know which sensor has the best detection capability to reliably monitor the warning and hazard zones in the presence of environmental noise. A measurable and statistically formulated evaluation of the detection capability therefore has to be established in the long term and meaningfully taken into account in safety certifications. This means that detection capability has to be considered on an equal footing with common safety approaches. Adhering to these approaches requires measures that can be referred to in practice, such as the Performance Level (PL), which demands measurable system characteristics such as the MTBF (Mean Time Between Failure) value. The validation scores developed in this paper are able to establish such a link between safety regulations and machinery directives. It should be further evaluated how the safety-related scoring should be mapped to describe which quality of detection capability qualifies for which PL/AgPL. The approach of Volk et al. is worthy of consideration here, as they proposed a classification range [26]. This could result in formulations such as: the required Reliability-Score for PL c is >90%. Additionally, the presented scores are only relevant to an application if the evaluation and quantification of possible environmental noise are standardized in parallel. Otherwise, the detection capability cannot be adequately expressed, since its quality is only of interest for an autonomous machine working in outdoor environments if the dependency on the environment can also be formulated. Finally, as shown in this paper, the REDA method is suitable for proving detection capability. This motivates the claim to test more sensor systems on the existing test benches and evaluate their detection capability, as well as to create more test benches to test other scenarios.
To do this in a more targeted way, a more standardized description of possible safety-critical situations would simplify the construction of test stands and test procedures for detection capability. This could be done in analogy to the automotive sector, which defines the Operational Design Domain (ODD) for this purpose.
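The score-to-PL mapping discussed above could eventually be expressed as a simple threshold lookup. The thresholds below are purely hypothetical placeholders for illustration; as argued here, the actual values still have to be defined in the standards:

```python
def required_pl(reliability_score: float) -> str:
    """Map a Reliability-Score (percent) to a Performance Level.

    The thresholds are hypothetical placeholders; which score
    qualifies for which PL/AgPL still has to be standardized.
    """
    thresholds = [("PL e", 99.0), ("PL d", 95.0), ("PL c", 90.0), ("PL b", 80.0)]
    for pl, minimum in thresholds:
        if reliability_score > minimum:
            return pl
    return "PL a"

# A score of 91% would satisfy the example "Reliability-Score for PL c > 90%" rule.
print(required_pl(91.0))
```

Standardized cut-off values inserted into such a lookup would give certifiers and machine manufacturers a reproducible bridge between the REDA-Scores and the PL/AgPL classification.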

Author Contributions

Conceptualization, M.K., C.M., S.E., Y.Z. and M.V.; methodology, M.K., C.M., S.E. and M.V.; software, M.K.; validation, M.K., C.M., S.E., Y.Z., M.V. and S.S.; data curation, C.M., S.E., Y.Z. and M.V.; writing—original draft preparation, M.K.; writing—review and editing, M.K., C.M., S.E. and S.S.; visualization, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the research project “agrifoodTEF-DE” (funding program Digitalisierung in der Landwirtschaft, grant number 28DZI04C23) funded by the Federal Ministry of Food and Agriculture (BMEL) based on a decision of the Parliament of the Federal Republic of Germany via the Federal Office for Agriculture and Food. The data utilized for the analysis were provided by SICK AG, where some of the authors were employed at the time this paper was written. The initial data were collected in the context of the research project ’AGRO-SAFETY’, which was funded by the German Federal Ministry of Education and Research (funding program Forschung an Fachhochschulen, grant number 13FH642IX6) and B. Strautmann & Söhne GmbH u. Co. KG.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to express our gratitude to our colleagues from SICK AG for their invaluable insights and expertise in sensor parameterization, which has enabled us to systematize error consideration in REDA measurements.

Conflicts of Interest

Authors Magnus Komesker, Christian Meltebrink, Stefan Ebenhöch, Yannick Zahner and Mirko Vlasic were employed by the company SICK AG. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AgPL   Agricultural Performance Level
eSDA   expected Specified Detection Area
MTBF   Mean Time Between Failure
ODD    Operational Design Domain
ODS    Object Detection System
PL     Performance Level
RED    Real Environment Detection
REDA   Real Environment Detection Area
REDAM  Real Environment Detection Area Matrix
ROI    Region of Interest
SDA    Specified Detection Area
SRS    Safety-Related Sensors

References

  1. Aby, G.R.; Issa, S.F. Safety of Automated Agricultural Machineries: A Systematic Literature Review. Safety 2023, 9, 13. [Google Scholar] [CrossRef]
  2. Ruckelshausen, A. Robotik und Sensortechnik. Inform. Spektrum 2023, 46, 8–14. [Google Scholar] [CrossRef]
  3. Adler, R. Making Agriculture Sustainable with AI and Autonomous Systems While Keeping Safety in Mind. Available online: https://www.iese.fraunhofer.de/content/dam/iese/publication/smart-farming-agriculture-sustainable-fraunhofer-iese.pdf (accessed on 3 May 2024).
  4. Robert, K.; Elisabeth, Q.; Josef, B. Analysis of occupational accidents with agricultural machinery in the period 2008–2010 in Austria. Saf. Sci. 2015, 72, 319–328. [Google Scholar] [CrossRef]
  5. ISO 18497:2018-11; Agricultural Machinery and Tractors—Safety of Highly Automated Agricultural Machines—Principles for Design. ISO: Geneva, Switzerland, 2018.
  6. ISO 25119:2018 (Parts 1–4); Tractors and Machinery for Agriculture and Forestry—Safety-Related Parts of Control Systems. ISO: Geneva, Switzerland, 2018.
  7. ISO 13849-1; Safety of Machinery—Safety-Related Parts of Control Systems—Part 1: General Principles for Design. ISO: Geneva, Switzerland, 2023.
  8. Tiusanen, R.; Malm, T.; Ronkainen, A. An overview of current safety requirements for autonomous machines—Review of standards. Open Eng. 2020, 10, 665–673. [Google Scholar] [CrossRef]
  9. Meltebrink, C.; Ströer, T.; Wegmann, B.; Weltzien, C.; Ruckelshausen, A. Concept and Realization of a Novel Test Method Using a Dynamic Test Stand for Detecting Persons by Sensor Systems on Autonomous Agricultural Robotics. Sensors 2021, 21, 2315. [Google Scholar] [CrossRef]
  10. Lee, C.; Schätzle, S.; Lang, S.A.; Oksanen, T. Design considerations of a perception system in functional safety operated and highly automated mobile machines. Smart Agric. Technol. 2023, 6, 100346. [Google Scholar] [CrossRef]
  11. ISO/PAS 21448:2019-01; Road Vehicles—Safety of the Intended Functionality. ISO: Geneva, Switzerland, 2019.
  12. ISO (2022): ISO 21815-1:2022(en); Earth-Moving Machinery—Collision Warning and Avoidance—Part 1: General Requirements. ISO: Geneva, Switzerland, 2022.
  13. Bozkurt, M.; Herrmann, A.; Woppowa, L. Vernetzungs-und Transferprojekt zur Digitalisierung in der Landwirtschaft DigiLand—Teilvorhaben VDI e.V. VDI-Fachbereich Max-Eyth-Gesellschaft Agrartechnik. VDI Verein Deutscher Ingenieure e.V., 20 May 2022. Available online: https://www.vdi.de/fileadmin/pages/vdi_de/redakteure/ueber_uns/fachgesellschaften/TLS/dateien/Agrartechnik/Auswertung-der-VDI-Mitgliederumfrage-Vernetzungs-und-Transferprojekt-zur-Digitalisierung-in-der-Landwirtschaft-2022.11.04.pdf (accessed on 24 January 2024).
  14. Meltebrink, C.; Komesker, M.; Kelsch, C.; König, D.; Jenz, M.; Strotdresch, M.; Wegmann, B.; Weltzien, C.; Ruckelshausen, A. REDA: A New Methodology to Validate Sensor Systems for Person Detection under Variable Environmental Conditions. Sensors 2022, 22, 5745. [Google Scholar] [CrossRef]
  15. IEC/TS 62998-1:2019; Safety of Machinery—Safety-Related Sensors Used for the Protection of Persons. IEC: Geneva, Switzerland, 2019.
  16. Meltebrink, C. New approach for safe environmental perception: A key technology for highly automated and autonomous machines. In LAND.TECHNIK AgEng 2023, The Forum for Agricultural Engineering Innovations, Hanover, Germany; VDI Verlag: Düsseldorf, Germany, 2023; pp. 227–233. ISBN 9783181024270. [Google Scholar]
  17. IEC 61496; Safety of Machinery—Electro-Sensitive Protective Equipment—All Parts. International Electrotechnical Commission: Geneva, Switzerland, 2020.
  18. Hoss, M.; Scholtes, M.; Eckstein, L. A Review of Testing Object-Based Environment Perception for Safe Automated Driving. Automot. Innov. 2022, 5, 223–250. [Google Scholar] [CrossRef]
  19. Stellet, J.E.; Zofka, M.R.; Schumacher, J.; Schamm, T.; Niewels, F.; Zollner, J.M. Testing of Advanced Driver Assistance Towards Automated Driving: A Survey and Taxonomy on Existing Approaches and Open Questions. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC 2015), Gran Canaria, Spain, 15–18 September 2015; pp. 1455–1462, ISBN 978-1-4673-6596-3. [Google Scholar]
  20. Vargas, A.P.S.; Berducat, M. Tests to evaluate agricultural technologies with embedded AI via AgrifoodTEF. In Proceedings of the AGRITECHDAYS, Rennes, France, 12 October 2023. [Google Scholar]
  21. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  22. Sumi, Y.; Kim, B.K.; Kodama, M. Evaluation of Detection Performance for Safety-Related Sensors in Low-Visibility Environments. IEEE Sens. J. 2021, 21, 18855–18863. [Google Scholar] [CrossRef]
  23. Christian, M.; Marvin, S.; Benjamin, W.; Cornelia, W.; Arno, R. Humanoid test target for the validation of sensor systems on autonomous agricultural machines. Agric. Eng. Eu 2022, 77. [Google Scholar] [CrossRef]
  24. Darms, M. Data Fusion of Environment-Perception Sensors for ADAS. In Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort, Aufl. 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 549–566. ISBN 978-3-319-12352-3. [Google Scholar]
  25. Badithela, A.; Wongpiromsarn, T.; Murray, R.M. Evaluation Metrics for Object Detection for Autonomous Systems. arXiv 2022, arXiv:2210.10298. [Google Scholar]
  26. Volk, G.; Gamerdinger, J.; Bernuth, A.; von Bringmann, O. A Comprehensive Safety Metric to Evaluate Perception in Autonomous Systems. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020. [Google Scholar]
  27. Beycimen, S.; Ignatyev, D.; Zolotas, A. A comprehensive survey of unmanned ground vehicle terrain traversability for unstructured environments and sensor technology insights. Eng. Sci. Technol. Int. J. 2023, 47, 101457. [Google Scholar] [CrossRef]
  28. Shutske, J.M.; Sandner, K.J.; Jamieson, Z. Risk Assessment Methods for Autonomous Agricultural Machines: A Review of Current Practices and Future Needs. Appl. Eng. Agric. 2023, 39, 109–120. [Google Scholar] [CrossRef]
  29. Martins, J.J.; Silva, M.; Santos, F. Safety Standards for Collision Avoidance Systems in Agricultural Robots—A Review. In ROBOT2022: Fifth Iberian Robotics Conference; Tardioli, D., Matellán, V., Heredia, G., Silva, M.F., Marques, L., Eds.; Springer: Cham, Switzerland, 2023; pp. 125–138. ISBN 978-3-031-21064-8. [Google Scholar]
  30. Berk, M.J. Safety Assessment of Environment Perception in Automated Driving Vehicles. Ph.D. Thesis, Technische Universität München, Munich, Germany, 2019. [Google Scholar]
  31. U.K. Meteorological Office. The Marine Observer’s Handbook, 11th ed.; H. M. Stationery Office: London, UK, 1996; Volume 2, ISBN 0-11-400367-X.
  32. Krause, C.; Iqbal, N.; Hertzberg, J.; Höllmann, M.; Martinez, J.; Nieberg, D.; Ruckelshausen, A.; Stiene, S.; Röttgermann, S.; Müter, M.; et al. Concept of a Test Environment for the Automated Evaluation of Algorithms for Robust and Reliable Environment Perception. In Proceedings of the TECHNIK 2022 The Forum for Agricultural Engineering Innovations, Online, 25 February 2022; VDI-Berichte 2395. pp. 177–184. [Google Scholar]
Figure 1. Agricultural machinery has to work in harsh environmental conditions as shown in the picture. The detection of living beings in this environment is challenging, even for humans. With the emergence of autonomy, when the monitoring of a machine’s surroundings becomes its own responsibility, the defining feature of a sensor system for reliable environment perception is its detection capability (copyright by David Bacon).
Figure 2. It is imperative that the occurrence of systematic and random errors in the hardware and software of the machine and its components be excluded or, at the very least, reduced to a minimum by the machine manufacturer. The figure illustrates this exclusion with red crosses, which indicate the exclusion of errors (lightning symbol) from consideration. These errors may occur either on the machine, the sensing unit, or in the bitstream between the machine and the sensor. The measures that must be taken are subject to the regulations of functional safety. However, when it comes to outdoor detection capability, the regulations of functional safety cease to apply, as the manufacturer must ensure that the environment, including vegetation, rain, or dust, does not affect the measurement result.
Figure 3. This figure shows the warning and hazard zone required by ISO 18497 [5]. The hazard zone is an inner part of the warning zone and both can have various machine-specific dimensions in the three-dimensional space. However, these three-dimensional spaces can be compared by using a bird’s-eye view.
Figure 4. See IEC TS 62998-1:2019, Figure 2, reproduced with permission of DKE German Electrotechnical Commission, www.dke.de (accessed on 16 April 2024) [15]. The SRS consists of the sensing unit(s), the processing unit, and the input/output unit. The detection of safety-related objects must be ensured based on these three units. The objective is to find an SRS capable of detecting various safety-related objects while ensuring that any influence/interference between the sensor and the object does not affect the detection results.
Figure 5. When non-contact sensors are used to sense the environment on an autonomous agricultural machine, ISO 18497 [5] specifies characteristics of the required dimensions of the hazard and warning zones. The 360° coverage of the machine’s surroundings will be accomplished with one or more sensors. By using the SDA (Specified Detection Area), the performance of a sensor’s detection capability for its specific monitored space can be implied.
Figure 6. The images present two distinct visual representations of the same REDA measurement, recorded on the Agro-Safety test stand. (a) depicts the vectors derived from the measurement, plotted by location, with the current measured sensor signal marked in black or green. (b) illustrates the REDA extracted from all adjacent points in (a), related to the Specified Detection Area (SDA) of the Object Detection System (ODS) (reprinted from Ref. [14]).
Figure 7. The figure shows how REDAM can be used to visualize the system limits of a sensor system caused by environmental influences (e.g., rain) and offers an overlay of different sensor technologies to find the best sensor fusion. This can be achieved by stacking REDA with outranging environment conditions above each other, as illustrated in (a). The top view of the same stack, depicted in (b), can be utilized to identify the limiting environment condition (Reprinted from Ref. [14]).
Figure 8. The detection information becomes relevant when the machine designer defines the level of detail that should be transferred to the machine in potential perception. This is mainly dependent on two dimensions, which can be combined in various ways. The illustration depicts a person as the detection goal. The graph illustrates that the type of perception can vary between a simple detection and the complete object classification. In a similar manner, the degree of information depth may vary considerably, encompassing such elements as field validation, concrete distance and size information, and classification accompanied by an estimation of the moving direction.
Figure 9. This figure illustrates a real-world situation at a specific moment (green/gray frame) that is monitored by the machine’s environment perception system (blue dashed frame). The safety-related function is to detect the safety-related objects despite the possible environment noise. Given the infinite number of potential combinations of object and environment overlay, it is necessary to describe the situation in a parameter-based manner in order to express the sensor’s detection capabilities in relation to these parameters.
Figure 10. The figure demonstrates how an ODS detection capability can be tested using a test designed with the REDA method. The procedure also adheres to the IEC/TS 62998-1 SRS architecture.
Figure 11. The figure illustrates the genesis of a REDA point cloud. The point cloud may be generated by combining the detection goal parameter matrix of a REDA measurement with the measured detection information by the timestamp. The result can be derived from the sensor signal and the relative position. As illustrated in gray, these vectors likely contain additional data. However, for the purposes of this illustrative evaluation, the colored data is utilized.
Figure 12. This figure shows how the isolated movement systematic edge displacement allows for the creation of the expected Specified Detection Area (eSDA).
Figure 13. The illustration depicts the application of the tolerance zone, which is derived from the edges of the eSDA. The tolerance zone is designed to accommodate random errors in a REDA measurement. The resulting area affects the REDA point cloud because it is not possible to ascertain with certainty whether the detection target may have moved due to other influences, and therefore, the sensor evaluated a field validation correctly.
Figure 14. The figure shows how the overlay of the expected Specified Detection Area (eSDA) above the random error accounting REDA point cloud allows the classification of REDA measured data points in a confusion matrix. This makes it possible to classify the Real Environment Detection.
Figure 15. This figure is an exemplary overview of how the evaluation allows the Availability-, Reliability-, and Usability-Scores to be calculated.
Figure 16. This figure shows the information that is stored in the proposed REDA-Database.
Table 1. Exemplary score overview for the example case of the usage of the REDA-Database.

Score               Sensor A   Sensor B
Usability-Score     94%        91%
Reliability-Score   99%        87%
Availability-Score  89%        95%

Share and Cite

MDPI and ACS Style

Komesker, M.; Meltebrink, C.; Ebenhöch, S.; Zahner, Y.; Vlasic, M.; Stiene, S. Validation Scores to Evaluate the Detection Capability of Sensor Systems Used for Autonomous Machines in Outdoor Environments. Electronics 2024, 13, 2396. https://0-doi-org.brum.beds.ac.uk/10.3390/electronics13122396

