Article

A Declarative Application Framework for Evaluating Advanced V2X-Based ADAS Solutions †

by András Wippelhauser 1,2,*, András Edelmayer 2 and László Bokor 1
1 Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., 1111 Budapest, Hungary
2 Commsignia Research, Irinyi József u. 4-20, 1117 Budapest, Hungary
* Author to whom correspondence should be addressed.
This paper is an extended version of the paper published in Wippelhauser, A.; Bokor, L. A Declarative Rapid Prototyping ADAS Application Framework Based on the Artery Simulator. In Proceedings of the 45th International Conference on Telecommunications and Signal Processing (TSP), Virtual Conference, 13–15 July 2022; pp. 333–337. https://ieeexplore.ieee.org/document/9851233.
Submission received: 16 November 2022 / Revised: 12 January 2023 / Accepted: 13 January 2023 / Published: 20 January 2023

Abstract:
Vehicular Ad Hoc Networks (VANETs) aim to help solve some of the problems that have arisen in road transportation systems via short-range, low-latency mobile communication. The application of V2X (Vehicle-to-Everything) communication technologies in the next generation of Advanced Driver Assistance Systems (ADAS) is essential to extending the operational design domain (ODD) of these systems and providing safe, secure, and efficient automated driving solutions. Due to the safety-critical nature of the problem, large-scale testing of V2X-enabled ADAS solutions, to evaluate and measure the anticipated quality and functionality of the experimental system, is of great significance. This article proposes a novel ADAS application prototyping framework, using declarative programming, built on top of the popular Artery/OMNeT++ simulator. The framework is capable of simulating V2X-enabled ADAS applications using accurate network simulation and realistic simulated traffic on real-world maps. The solution features XML descriptions for application specification, and the sensor model of Artery is used to provide information to the applications. By using the simulator, one can evaluate the performance of the applications and discover locations, circumstances, and design patterns where design limits should apply.

1. Introduction

By the early 2000s, the trends in automotive development had visibly changed. The priority of traditional driving functionality gave way to new design issues such as traveling safety, energy efficiency, and emissions.
In order to comply with the above issues, an increasingly systemic approach, formerly not commonly adopted in vehicle design and operation, came to the fore. A vehicle (whether traveling autonomously or supervised by a human driver) needs to be able to operate in a dynamically changing and varying environment and cope with rare and unexpected situations, including pedestrians and bicyclists on the road, unfolding accidents, the presence of wildlife, adaptation to unpredictable weather conditions, etc. Given the above, it soon became apparent that two key driving features affecting the quality of vehicle handling and control needed to be significantly improved: (i) the performance of the decision-making capability of the driver (vehicle control) and (ii) the availability and quality of perception data collected from the immediate driving environment.
Indeed, human decision making can be improved by assisting and supporting the human driver, including the more efficient management of human-related errors. Moreover, the wide use of perception data and its feedback into drive control mechanisms became a primary design quality in contemporary vehicles, giving rise to the technology known as Advanced Driver Assistance Systems (ADAS).
The application of ADAS methodology combined with V2X promises a solution to some of the most challenging traffic problems of our time [1,2]. Since ADAS can influence driving performance, it must be considered safety-critical, which creates demand for an efficient and sufficiently detailed simulation framework to evaluate such applications and vehicle systems using accurate network and sensor models. In this work, we propose a novel extension to the Artery/OMNeT++ simulator to support rapid prototyping of V2X-enabled ADAS applications with large-scale testing and evaluation capabilities in complex environments. Our solution features a flexible configuration scheme and an accurate sensor suite with realistic traffic and network models. To the best of our knowledge, no such system is currently available to the V2X community.
There are numerous VANET simulators and testing tools available for research purposes; a detailed summary of such systems is provided in [3]. In the scope of this work, we selected Artery [4,5], the most widespread state-of-the-art V2X simulator, due to its accurate network handling, precise protocol and realistic traffic model implementations, and its capability for efficient full-stack V2X simulation.
Artery is an advanced V2X simulator built on top of OMNeT++ [6], a discrete-event network simulator engine, and the INET framework [7], an OMNeT++ extension library responsible for accurate protocol stack implementation, especially for physical layer operations. Artery also features an exact ITS-G5 protocol model via the Vanetza library [8], a standard-compliant protocol implementation that ensures the accuracy of the protocol model. SUMO [9], a microscopic traffic simulator, is responsible for traffic simulation in Artery. It provides a number of accurate driving models and map generation tools, including support for OpenStreetMap [10].
The Artery simulator integrates the components listed above and extends them with additional modules. The extensions include a local object database called LocalEnvironmentModel and a sensor model that feeds the database. Vehicles in Artery can be configured to maintain the object database. The sensor model supports both V2X messages as object sources (Cooperative Awareness Messages, CAMs, by default, and Collective Perception Messages, CPMs, with the extensions of [11]) and models of physical sensors. The V2X object source detects a remote object when a corresponding message is received. The physical sensor model detects an object, e.g., a road user, if the related object is in the sensor's field of view. The field of view and the on-board (in-vehicle) placement of the sensor are configurable.
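The field-of-view-based detection described above can be sketched as follows. This is a simplified, hypothetical model (the struct and function names are illustrative, not Artery's actual API, and the real sensor model additionally handles details such as occlusion):

```cpp
#include <cassert>
#include <cmath>

// Simplified sensor detection check: an object is "seen" if it lies within
// the sensor's range and angular field of view, both measured from the
// sensor's mounting position and heading. (Hypothetical sketch only.)
struct SensorConfig {
    double rangeMeters;  // maximum detection distance
    double fovDegrees;   // total opening angle of the field of view
};

bool detects(const SensorConfig& cfg,
             double sensorX, double sensorY, double sensorHeadingDeg,
             double objX, double objY)
{
    const double kPi = 3.14159265358979323846;
    const double dx = objX - sensorX;
    const double dy = objY - sensorY;
    if (std::hypot(dx, dy) > cfg.rangeMeters) return false;

    // Bearing of the object relative to the sensor heading, wrapped to [-180, 180].
    double bearing = std::atan2(dy, dx) * 180.0 / kPi - sensorHeadingDeg;
    while (bearing > 180.0) bearing -= 360.0;
    while (bearing < -180.0) bearing += 360.0;
    return std::fabs(bearing) <= cfg.fovDegrees / 2.0;
}
```

With a 150 m range and a 90° field of view, an object 50 m ahead and slightly to the side is detected, while one directly abeam or beyond the range is not.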
The remainder of this paper is structured as follows. In Section 2, some background information on the evolution of ADAS and autonomous driving applications is presented and the role of V2X in the progress is characterized. Section 3 highlights preliminaries and the most important related work of V2X-based ADAS simulations. This is followed by Section 4 where the proposed ADASApp framework is described with details of its implementation. Then, a brief comparison of the original Artery and our extended version is provided emphasizing the added values of the development. In Section 5 and Section 6, example use cases are shown which were realized as ADASApp extensions. Some distinguishing capabilities of the extended framework with simulation results are showcased. In Section 7, some concluding remarks close the paper.

2. Background

2.1. Evolution of ADAS Applications

From the very early ADAS applications, such as anti-lock braking systems (ABS), to the more complex ones in wide use today, which employ an entire repository of environment sensors (radars, lidars, ultrasonic sensors, cameras, etc.) and advanced human-machine interface solutions, the technology has undergone significant development while shaping the whole vehicle industry. It has proved capable of reducing fatalities on roads across the world and of providing more comfort and assistance for vehicle users, which became a primary selling point in car markets. For a more comprehensive review and history of driving assistance systems, see, e.g., [1,12].
The human driver's limited decision-making and control capacity and the continual striving to reduce human errors ultimately led to the conceptual removal, or at least the partial replacement, of the human driver. This idea substantially contributed to the establishment of the concept of the self-driving car in the past decade. The initial aspects that shaped the evolution of ADAS were therefore later complemented by the new requirements of partially and fully automated driving. Dependency on perception data requires using automated sensing and control functions on an ever-increasing scale. Thus, the two driving factors of ADAS development characterized above (i.e., human driver-related issues and the integration of environment detection technologies into vehicle control) are now spiraling together in the research and development of autonomous vehicle technologies.
The newly emerging requirements of autonomous driving, however, not only foster ADAS development but also raise many difficult new questions. One of the most intriguing problems is that all of the driver assistance systems deployed to date can operate only on a relatively short time horizon and in an extremely limited scope and setting. Emergency braking kicks in at the last instant before a predictable crash, and its activation is based on local distance measurements. Lane detection activates briefly when a car veers out of its lane, based on imaging information taken from the next fifty meters of the road ahead. On-board ADAS operations are frequently interrupted and remain uncoordinated. In these situations, the human driver must remain continuously alert and ready to take over control from the partially automated system.
The more automated the vehicle functions are, the more critical the presence of the human driver becomes. This is because, on the one hand, human supervision is not adequate for extended periods of time; on the other hand, the constant shift between automated and human-driven operation is fraught with danger. Air accident statistics show that the vast majority of accidents caused by human error occur during the takeover from the autopilot. Humans cannot safely and efficiently take over control abruptly and counteract the limitations and deficiencies of the driver-assistance system. This must change drastically once the car is expected to drive itself continuously for minutes or hours.
Critical issues in the development of the latest ADAS technologies are, therefore, how to increase their operational design domain (ODD) and how to extend the time horizon of their operation. Improving the performance and efficiency of environment sensing seems to be the right step toward finding a solution.
It is important to note that the operation of recent ADAS technologies is based on the principle of ego-centric perception, meaning that the driving environment is perceived and evaluated based on local measurements made from the viewpoint of a particular vehicle called the ego vehicle. Relying on this principle is a limiting factor regarding both the size of the operational domain and the time horizon of operation; just consider the line-of-sight and distance limitations of conventional sensors. Moreover, most of the risk scenarios that a continuously operating driver assistance system needs to handle can only be assessed from a higher perspective: one that can represent an extended driving environment, including the characterization of the co-related actions of other traffic participants.
Incorporating cooperative vehicle communication into environment sensing is essential to stepping out of the ego perspective. It makes it possible to use multiple redundant measurements, which are then fused and processed according to the given traffic scenario so that other partners' anticipated behavior, even outside the line-of-sight zone, can be detected in time. Timely detection is a key enabler of enhanced situation awareness, increased ODD, and an extended time limit of operation. The following section briefly characterizes this communication technology and details its role in perception.

2.2. V2X Evolution of Enhanced Perception for ADAS and Autonomous Driving

As introduced above, ADAS and autonomous driving technologies rely on sensor infrastructures to discover the ego vehicle's environment. The output of these sensors is processed, and the appropriate in-vehicle systems respond accordingly and instantaneously. This scheme enables the vehicle to detect, e.g., a forthcoming collision event and automatically initiate a response action (such as braking) significantly faster than a human driver would ever be capable of. Even though modern sensor solutions provide a 360-degree field of view, they still pose severe limitations. For example, it is impossible for them to see around corners or across multiple crossroads, or to warn drivers of potential hazards before they appear. Moreover, sensor systems often rely on artificial intelligence to process the data they collect, which may result in the misinterpretation of sensor inputs in certain circumstances.
Vehicle-to-Everything (V2X) communication tackles the above issues of automotive sensor architectures by implementing a novel way of perception that relies on cooperative information sharing among road users and road infrastructure. Unlike legacy ADAS applications that depend on sensors modeling human senses, the V2X-based successor techniques will take the paradigm to the next level by moving beyond the current limitations of line-of-sight capability and operating distance. In fact, V2X is considered a new generation of sensors: it is just like radar or lidar but with an active, communicating party on the other side, making V2X the only sensor with non-line-of-sight abilities. Furthermore, V2X can operate and perform perception tasks independently of weather and lighting conditions, paving an innovative path toward reliably and precisely mapping the physical environment into its digital representation.
The V2X toolset will effectively help vehicles comprehend any situation possible on the road, playing an essential role in all four main stages of general autonomous driving procedures. First, V2X as a sensor will contribute to discovering objects, other road users, etc., by extensively sharing status, attribute, and sensor information. Second, V2X, as a pillar of cognition, will help to recognize any specific scene in the environment; for that, intention data exchange will be applied. Third, V2X will also improve autonomous driving decisions by taking part in the selection of the best maneuver the vehicle needs to take, utilizing transmitted trajectory/maneuver data. Fourth, V2X will aid actuation procedures by defining how road users must behave while on the move: V2X will be used to confirm that the decided maneuver is the correct one and to facilitate actuation effects using coordination data exchange.
In the above innovative application areas of V2X, an evolution regarding the use of dynamic data by ADAS and autonomous driving systems can be seen. We are moving from solutions that ensure the efficient supply of information to the human user (driver) towards the possibilities that can be realized with cooperation between autonomous vehicles. The significant development steps of this evolution are manifested in the supported use cases that we group into “Days” [13].
V2X applications implementing Day 1 use cases support awareness driving. Awareness means that the V2X protocol stack, through its specialized Facilities-layer message services [14], makes the driver and the vehicle aware of the primary status/attribute and event information relevant to the environment. This is done by receiving and processing periodic or event-driven messages from communication partners. The common feature of Day 1 applications is that they are relatively simple and centered on human drivers, so they do not interfere with driving; they only provide drivers with additional information. The most crucial Day 1 use cases include those ADAS applications that increase the cooperative awareness of road users by relying on the status and attribute information delivered by Cooperative Awareness Messages (CAMs), sent periodically (with a frequency of 1–10 Hz) based on the "see and be seen" principle [15]. These include applications that support safe passage through intersections, turning, changing lanes, and overtaking, as well as blind spot monitoring and collision warning. Cooperative Adaptive Cruise Control (C-ACC) can also be classified here because the joint implementation of acceleration/deceleration in the direction of travel is also based on the CAM Day 1 message service. Preparation for unexpected and potentially dangerous events is aided by the information carried in event-driven Decentralized Environmental Notification Messages (DENMs), with which road users can warn each other about road works, weather conditions, stationary or slow vehicles, crossing prohibited signs, and similar dangerous situations [16].
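The 1–10 Hz CAM generation mentioned above follows the triggering rules standardized in ETSI EN 302 637-2; a simplified sketch (ignoring congestion-dependent interval adaptation and heading wrap-around, and using illustrative type names) is:

```cpp
#include <cmath>

// Simplified CAM generation-rule check after ETSI EN 302 637-2: a new CAM
// is due when at least 100 ms have passed since the last one and either the
// vehicle dynamics changed noticeably or a full second has elapsed.
// (Sketch only; the standard also adapts the interval to channel load.)
struct VehicleState {
    double headingDeg;
    double posX, posY;  // position in meters
    double speed;       // m/s
};

bool camDue(const VehicleState& last, const VehicleState& now, double msSinceLastCam)
{
    if (msSinceLastCam < 100.0)   return false;  // upper bound: at most 10 Hz
    if (msSinceLastCam >= 1000.0) return true;   // lower bound: at least 1 Hz
    const bool headingChanged = std::fabs(now.headingDeg - last.headingDeg) > 4.0;
    const bool moved = std::hypot(now.posX - last.posX, now.posY - last.posY) > 4.0;
    const bool speedChanged = std::fabs(now.speed - last.speed) > 0.5;
    return headingChanged || moved || speedChanged;
}
```

A stationary vehicle thus transmits at 1 Hz, while a vehicle whose position, heading, or speed changes quickly transmits at up to 10 Hz.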
The use cases referred to as Day 1.5 cover situations where status/attribute information is shared and used in multimodal applications. In the next step, Day 2 will lead to the era of "sensing driving", which relies on shared sensor data and expands the possibilities of Day 1 applications in several respects. The V2X messages communicated here no longer only provide information about the sender road users but also allow the sharing of the digital representations (objects) they detect, thus facilitating the operation of new ADAS applications with additional capabilities. An excellent example is the Collective Perception Message (CPM) protocol and message service [17]. With the help of CPM, we can share the objects detected by the sensors and monitoring systems of the vehicle or the infrastructure, and/or the free ("empty") area between the objects ("free space"). The sensor information and detections conveyed in CPM messages can be used as input for sensor fusion procedures, which, using this additional data and running various refinement algorithms, can form a much more accurate map of the world surrounding the vehicle and the current characteristics of the environment. Vehicles can also receive real-time sensor data from relevant areas that may be occluded from their own sensors or beyond their sensing range. With the approaches available in the Day 2 phase, data from objects that are not themselves capable of V2X communication can be included in the cooperative domain, thus increasing the detection area.
After the Day 2 phase, the applications of so-called cooperative and synchronized cooperative driving will presumably become the next evolutionary steps, gaining an increasing role with the spread of vehicles reaching more advanced levels of autonomous driving. The essential innovation foreseen in Day 3 and Day 3+ V2X applications is the sharing of vehicle intention and coordination data. Cooperative Automated Vehicles (CAVs) will be able to exchange such information so they can coordinate their maneuvers with other road users or the infrastructure and prevent conflicts. This includes, for example, the AGLOSA (Automated Green Light Optimized Speed Advisory) application, which can adjust the speed of controlled CAVs to the phase transitions of traffic lights by actively and adaptively controlling their pace [13]. In the information exchange of the relevant messaging services, traffic participants will share their planned trajectories, with which traffic can be much more continuous and smooth than at present, and accidents caused by the misinterpretation of driving intentions, or even just unnecessary slowing down, can be avoided.
The above-introduced V2X evolution and example application areas all point to the fact that ADAS and autonomous driving solutions will increasingly rely on cooperative communication, and for some, V2X will already be an absolutely essential requirement. In order to foster research in the domain, this article presents a declarative application framework for evaluating V2X-based ADAS solutions in the Artery/OMNeT++ simulator. Our scheme is based on the most crucial Day 1 and Day 2 V2X messaging services (CA, DEN, and CP) and supports architecture-level extendability towards Day 3 and beyond.

3. Related Work

Even though there are field tests and large-scale deployments already available, the development and performance evaluation of the evolving V2X technologies still strongly require extensive simulation-based experiments. There is vast literature on the simulation of V2X protocols, applications, and different V2X-based solutions: numerous simulation techniques, tools, and frameworks are available for evaluation purposes [18,19,20]. Simulation tools supporting ADAS testing and development can also be found in the literature, see, e.g., in [21,22,23,24]. However, the set of V2X simulators with models/extensions for advanced driver-assistance systems is much smaller, and when it comes to free/open-source V2X simulators with explicit ADAS support, the list becomes almost empty, at least to the best of our knowledge. Below, we present the related work by introducing the available commercial products and existing publications of V2X-capable ADAS application simulator solutions.
Vector, an automotive electronics company, supplies related industries with a professional platform of tools, software components, and services for developing embedded systems. Their product CANoe.Car2x [25] supports V2X standards and protocols from China, the EU, and the US, and lets users load various test scenarios and generate valid V2X communication. The authors of a Vector white paper [26] show that CANoe.Car2x can be applied to testing ADAS applications by generating, in a time-synchronized way, the entire data traffic for all networks used by the ADAS ECU. The system is modular, and different network interfaces, ECUs, and applications to be tested can be incorporated into hardware-in-the-loop simulations.
ADAS iiT, a collaboration of four National Instruments (NI) Alliance partners, offers a commercial simulator for V2X and related technologies [27]. Their V2X test system solutions are designed as a modular system of different building blocks with clearly defined interfaces based on NI hardware and software, S.E.A. V2X, various communication interfaces, and GNSS simulation tools [28]. The solution is flexible: tests of single devices or complete systems in functional or hardware-in-the-loop testing applications are also achievable, especially in the V2X sensor fusion and ADAS verification domain. Intra-vehicle communication, position information management, and scenario generation are all available.
dSPACE also provides a commercial product within its V2X portfolio called the dSPACE V2X Interface for waveBEE [29]. This flexible solution supports all relevant standards for V2X and cellular V2X, paired with scalable computation power, and makes it possible to integrate dSPACE's advanced HiL- or PC-based simulation solutions for ADAS/AD into a waveBEE V2X simulation environment initially developed by Nordsys [30]. The system offers a wide range of testing capabilities for various applications, from V2X Day 1 to CAVs, that require detailed vehicle, environment, and infrastructure modeling and simulation.
In [31], the authors introduce the 3D Simulator for Cooperative ADAS and Automated Vehicles (3DCoAutoSim). It is a flexible, modular, tailored vehicle simulator with 3D visualization, enabling it to emulate various controlled driving environments for investigating the effect of automation and V2X communication on drivers. The solution does not include proper, detailed models of communication protocols or standardized V2X architectures. Instead, 3DCoAutoSim simulates V2X transmissions within the applied 3D visualization engine by calculating the communication between the different entities while considering radio signal propagation characteristics and reception failures resulting from long distances and/or building interference [32].
The authors of [33] proposed a system integrating Veins [34] with a 3D driving simulator to support the development of next-generation ADAS systems. With a particular ego vehicle interface, they made it possible to give control over an ego vehicle to human drivers observing the 3D environment from a first-person perspective within a VANET-enabled car. The ego vehicle is synchronized with Veins, in which DSRC communication and all related networking protocols are simulated. This allows a human driver to experience future V2X-based ADAS applications, and future V2X applications can also be tested with real users. The system focuses on the involvement of human interactions rather than on modeling ADAS solutions with V2X capabilities. Moreover, it relies on Veins, which, on the one hand, lacks the option to study the behavior of several vehicles with different V2X capabilities and, on the other hand, does not support ADAS applications relying on the perception of the environment through environment sensors.

4. ADASApp: An Extension of the Artery Simulator

Our goal was to implement an ADAS application development framework (called ADASApp [35]) on top of Artery/OMNeT++ that supports V2X-based actions and easy, versatile configuration of applications, making rapid prototyping of complex C-ITS systems possible. The framework is able to handle multiple information sources, including sensor data and V2X messages (such as CAMs, DENMs, and CPMs). The application framework also features callbacks to manage the state of applications. In this system, SUMO deals with traffic modeling, relying on a rich toolset that supports a wide variety of road user types (e.g., passenger cars, trucks, trams, cyclists, and motorbike riders, but also vulnerable road users and pedestrians). Miscellaneous modeling phenomena such as driver behavior, accident handling, traffic control, and weather conditions can be considered in the system through various parameters.

4.1. ADASApp Components and Operation

ADASApp is built on top of the LocalEnvironmentModel concept of the Artery framework. Artery features a sensor model that defines multiple types of sensors. One of these is the CAM sensor, which subscribes to CAMs and records the received vehicle's identifier to look up its motion data. Artery also supports models of classic sensors such as radars and lidars. Such sensors are modeled by their mounting position, heading, and field of view. This model does not include sensor data generation and detection; instead, it only considers the aforementioned parameters of the sensors and filters the objects accordingly. This sensor model is sufficient for most V2X application testing scenarios. However, to better incorporate the effects of sensor errors and detection algorithms, we plan to integrate a more accurate sensor suite in the future, as discussed in Section 7. If a sensor detects a vehicle, the vehicle's ID is recorded in the LocalEnvironmentModel so that its position can be looked up. Our application framework is connected dynamically to this sensor model. The application model performs the application logic calculations with a configurable frequency. In this process (as depicted in Figure 1), the application orchestrator module fires the application logic with the objects available in the LocalEnvironmentModel.
To the best of our knowledge, there is no other extension available for Artery that supports application development with the built-in detailed sensor model.

4.2. Filter–Effect Concept

The applications in this framework have two major components: filters and effects. The filters define the conditions that have to be fulfilled to trigger an application. Each application contains exactly one filter, and filters can contain zero, one, or many other filters; they can be combined via logical operators such as the And and Or conditions. The following filters are currently implemented: True, Or, And, Not, RelativeHeading, AbsoluteHeading, SelfHeading, InRelativeBoundingBox, InGeographicalBoundingBox, SelfInGeographicalBoundingBox, Range, Speed, SelfSpeed, SpeedDifference, Acc, Named, and RWWAheadFilter. Filters can also be registered to receive DENMs, facilitated by the orchestrator module of our framework. An application can contain multiple effects. If the filter conditions of an application are met, then all effects are updated with the particular object. Finally, if any object matches the filter, the trigger method of the effect is also called. Using the update method, we can effectively implement applications where the application effect depends on multiple traffic participants. Such use cases include platooning, for example, where we need to represent the platoon members in our internal structures. Several effects are implemented, such as the Logging, Platooning, Brake, LaneChange, and Reroute effects. The filters and effects are combined into ADAS applications using an XML file format, which offers a declarative way to describe applications: we define the circumstances under which we want the application to activate and the actions we want the vehicles to take. In contrast, an imperative approach would require defining the operations to be performed on the input data.
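The filter–effect composition can be illustrated with a minimal C++ sketch. The interface and class names below are simplified stand-ins inspired by the filter and effect names listed above, not the framework's actual API:

```cpp
#include <memory>
#include <vector>

// Minimal sketch of the filter-effect concept: filters decide whether an
// object matches; effects are updated per matching object and triggered
// once if any object matched. (Illustrative, not the framework's code.)
struct Object { double range; double speedDiff; };

struct Filter {
    virtual ~Filter() = default;
    virtual bool matches(const Object& o) const = 0;
};

struct RangeFilter : Filter {                 // object closer than maxRange
    double maxRange;
    explicit RangeFilter(double r) : maxRange(r) {}
    bool matches(const Object& o) const override { return o.range <= maxRange; }
};

struct SpeedDifferenceFilter : Filter {       // large closing speed
    double minDiff;
    explicit SpeedDifferenceFilter(double d) : minDiff(d) {}
    bool matches(const Object& o) const override { return o.speedDiff >= minDiff; }
};

struct AndFilter : Filter {                   // logical combination of children
    std::vector<std::unique_ptr<Filter>> children;
    bool matches(const Object& o) const override {
        for (const auto& c : children)
            if (!c->matches(o)) return false;
        return true;
    }
};

struct BrakeEffect {
    int updates = 0;
    bool triggered = false;
    void update(const Object&) { ++updates; }  // called for each matching object
    void trigger() { triggered = true; }       // called once if anything matched
};

// One orchestrator step: evaluate the filter tree over all known objects.
void evaluate(const Filter& f, const std::vector<Object>& objs, BrakeEffect& e)
{
    bool any = false;
    for (const auto& o : objs)
        if (f.matches(o)) { e.update(o); any = true; }
    if (any) e.trigger();
}
```

Composing a Range filter and a SpeedDifference filter under And, for instance, yields a brake application that only fires for nearby, fast-approaching objects.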

4.3. State Handling

The proposed application framework has callbacks to handle the applications' state. One example is the platooning application, which looks for vehicles to track in the ego vehicle's neighborhood. If it finds some, it starts to track the discovered vehicles. Tracking includes manual speed setting and the denial of SUMO-initiated lane changes. However, if the vehicle loses the tracked vehicle, its state needs to be reset so it can proceed as a regular vehicle. To implement this, we added a reset method to both the effect and the filter interfaces.

4.4. Integration of Collective Perception

Collective perception services are expected to offer increased awareness since they put unconnected road users on the V2X horizon. These services also extend the amount of data available on the V2X network with redundant information about traffic participants. To benefit from the advantages of the collective perception service, we integrated the latest CPM standard into ADASApp and implemented a CP service responsible for sending and receiving CPMs, as an update to our previous work [11]. Due to the limited bandwidth of the V2X channel, sender stations usually apply redundancy mitigation techniques [36]. In our implementation, we only included objects detected by radar, so objects known only from CAMs and CPMs were not considered when assembling CPMs. The CPMs were sent at a fixed 10 Hz rate.
We also integrated the collective perception mechanisms into Artery's LocalEnvironmentModel. This database tracks the remote-entity detections of each equipped vehicle. Detection can happen via the sensor models that already exist in Artery, i.e., the radar and CAM-based detection mechanisms. Radar detection depends on the sensor parameters, including the field of view, the mounting location, and the range. CAM-based detection happens when a CAM is received from a remote entity via the V2X channel. Artery tracks the station id and the SUMO id in a centralized module to provide object mapping. Instead of using the raw CAM data, the LocalEnvironmentModel uses this identity registry to provide a unified interface for the applications. This approach simplifies the environment model by eliminating the need for perception fusion on the vehicle side. Our CPM sensor detection mechanism follows a similar approach to the CAM sensor; however, perception information (i.e., detected objects) is exchanged instead of data about the sender vehicle. During implementation, we introduced a new optional field in the CPM because the range of the object identifier in its original format was not large enough to store the Artery-type identifiers. This field contains the station id of the detected vehicle and has only a minor effect on communication due to its small size. On the receiver side, we used this information to add the detected object to the LocalEnvironmentModel.
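The sender and receiver sides of this CPM flow can be sketched as follows. This is a simplified Python illustration under stated assumptions: the dictionary-based message layout, the `source` tag, and the function names are hypothetical, not the ASN.1 CPM structure or the Artery API.

```python
# Hedged sketch of the CPM flow described above (hypothetical names).
# Sender: only radar-detected objects are included (CAM/CPM-derived ones
# are skipped, as in our implementation).
# Receiver: the extra station-id field is resolved through the identity
# registry and the object is added to the environment model.

def assemble_cpm(sender_station_id, local_objects):
    """local_objects: dicts with 'station_id' and 'source' ('radar'/'cam'/'cpm')."""
    return {
        "sender": sender_station_id,
        "objects": [o["station_id"] for o in local_objects
                    if o["source"] == "radar"],
    }

def receive_cpm(cpm, identity_registry, environment_model):
    """identity_registry maps station id -> simulator (SUMO) id;
    environment_model maps sumo_id -> set of perceiving station ids."""
    for station_id in cpm["objects"]:
        sumo_id = identity_registry.get(station_id)
        if sumo_id is not None:
            environment_model.setdefault(sumo_id, set()).add(cpm["sender"])
    return environment_model
```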

4.5. Redundancy Measures

Thanks to the V2X services, especially CA and CP, any ADAS system modeled in the simulated vehicles can rely on a substantially extended amount of input data. These data can also be redundant, meaning that receivers may have information about a detected object from multiple sources. In our models, such sources can be the radar sensor of the vehicle, CAMs, or CPMs, potentially from multiple originating vehicles. The increased amount of highly redundant data can result in more accurate detections, more precise decisions, and, eventually, safer and more secure applications.
In the scope of this work, we implemented a CP-aware LocalEnvironmentModel, as described in Section 4.4. The objects received from other vehicles (detected by their sensors and sent in CPMs) are fed into this module; therefore, it is the ideal location to implement detection redundancy measurements. The ADAS application models run every 100 ms, after sampling the database and gathering perception information. Between two samplings, we record each individual detection from each sensor source, as depicted in Figure 2, and then count the number of perceptions for each application processing interval. As Figure 2 shows, the first interval contains three perceptions from individual remote entities and one from the ego vehicle's sensor, while the second interval contains one remote and one local detection. By logging detections in this way, we can analyze the effects of the different V2X-based applications/services and their deployments, e.g., redundancy mitigation techniques, as well as the quality and redundancy of the object data.
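The per-interval bookkeeping can be expressed compactly. The sketch below is illustrative (function and source names are hypothetical): within one 100 ms interval, the redundancy score of an object is the number of distinct sources that reported it.

```python
# Sketch of the redundancy bookkeeping described above: within one 100 ms
# application interval, an object's redundancy score is the number of
# distinct perceiving sources. Names are hypothetical.

from collections import defaultdict

def redundancy_scores(detections):
    """detections: (object_id, source) tuples recorded in one interval.
    Returns object_id -> number of distinct perceiving sources."""
    sources = defaultdict(set)
    for object_id, source in detections:
        sources[object_id].add(source)
    return {obj: len(srcs) for obj, srcs in sources.items()}
```

With the Figure 2 example, three remote perceptions plus the ego sensor in the first interval yield a score of 4, and one remote plus one local detection in the second interval yield 2.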

4.6. Comparison with the Artery Storyboard

Artery also implements a scenario definition framework called Storyboard [37], which is conceptually similar to the ADASApp proposal this article introduces. However, our solution has several advantages over the Storyboard implementation; a comparison between the two can be found in Table 1. In the scope of this work, we use the LocalEnvironmentModel. In contrast, the Storyboard uses the raw SUMO database, which means that, from the application perspective, each vehicle sees every other vehicle, regardless of its physical location or V2X capabilities. With many simulated road users, this can lead to scalability issues. Assuming that the distribution of the simulated entities is similar and the simulated space is sufficiently large (e.g., larger than 1 square kilometer), the required computational power for the Storyboard grows as O(n²) with the size of the simulated area, whereas for our framework it only grows as O(n), because we perform the application logic only on the detected objects instead of the whole object set. Our proposal also has a more accurate sensor model, which helps to examine the effects of the communication channel quality on application performance. Our solution can also include objects from perception messages, so the ADAS applications can work on a detailed and accurate environment model. The introduction of perception services also means that, with ADASApp, we can examine the level of redundancy. This opens the way to various studies on the safety and security of the system and on the quality of the object information. Moreover, we can investigate the impact of the V2X penetration or the number of sensors per vehicle on the performance of ADAS applications, as well as roadside sensor deployments that ensure the safety of intersections. In contrast, the Storyboard is not capable of examining the data at this granularity.
Compared to the Storyboard, our solution uses a declarative XML configuration to define the safety applications (sample configurations can be found in Appendix A and Appendix B). The Storyboard solves this via a Python script, which leads to numerous runtime calls to an interpreter; calling the Python interpreter from the simulator likely introduces a performance penalty compared to XML-based initialization and a pure C++ implementation. The Storyboard also has a so-called effect-stack feature that deals with concurrent effects. Our framework, in contrast, takes a simpler approach and executes all effects regardless of the other applications.

5. Implemented Example Use Cases and Configuration Details

In the scope of this work, we addressed two typical V2X-based application use cases and developed a driver model for each of them.

5.1. Event Awareness

In the Road Works Warning use case (the application concept is depicted in Figure 3), the simulation infrastructure was instructed to send Decentralized Environmental Notification Messages (DENMs) [16] about road works performed on a multi-lane road segment. We expect these messages to increase driver awareness: drivers can prepare better for such situations, decrease their speed, and start lane-change maneuvers in time. Eventually, this leads to fewer traffic jams and safer transportation in general.
The road works were modeled via a standing vehicle in SUMO (with zero or very low velocity), as depicted in Figure 4. With this method, the regular non-V2X vehicles could not foresee the event, which caused traffic jams behind the standing vehicle due to late lane changes. Using the DENM information, the V2X-capable road users could detect the road event in time. In this simplified model, the vehicles started the lane change immediately upon receiving the DENM. The use case was simulated in both urban and highway scenarios. The showcase results of the model are discussed in Section 6.

5.2. Platooning

The platooning application (the application concept is depicted in Figure 5) helps the equipped vehicles maintain a short gap to the preceding vehicle. The short inter-vehicle gap offers several benefits, including reduced air drag and better utilization of the roads [38]. In the scope of the platooning application, we consider lateral control (i.e., vehicle steering) to be solved and focus solely on longitudinal control. We relied on our previous work [39] to select the proper control algorithm for this implementation. The gap size and the throttle control depend on the control algorithm running on the vehicles.
In the scope of this work, we did not implement any platooning-specific or management-type communication. Instead of dedicated platooning messages, we used CAMs to transfer speed, position, and other relevant dynamic data as inputs for the individual C-ACC algorithms and to maintain the vehicle platoons. In this example implementation, if an equipped vehicle detected a vehicle in front of it (practically, within a bounding box) by means of CAMs or sensor data, it engaged its platooning application. This meant that longitudinal control was passed to the platooning application instead of SUMO's regular driver model; we also prohibited SUMO-controlled lane changes for these cars. In practice, if a vehicle saw another vehicle that it could follow, it accelerated and then gently slowed down to follow its partner smoothly. In Figure 6, the vehicles denoted in red are the so-called follower vehicles, and the yellow ones are either platoon leaders or vehicles not in platoons. In this work, we model the platooning application without Platooning Control Messages; however, the solution can be extended with such messages. The advantage of this approach is its simplicity, while the drawback is that we cannot explicitly control platoon membership: for example, the vehicles will not change lanes to form a platoon.
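A longitudinal controller of the kind used here can be sketched as a constant time-gap law fed with the CAM-derived leader state. The gains, the time gap, and the control law below are assumptions for illustration, not necessarily the algorithm selected in [39].

```python
# Illustrative constant time-gap controller for the longitudinal control
# described above. Gains and parameters are assumed example values, not
# the algorithm chosen in our earlier work.

def cacc_acceleration(gap, ego_speed, leader_speed,
                      standstill_gap=2.0, time_gap=0.6,
                      k_gap=0.45, k_speed=0.25):
    """Commanded acceleration (m/s^2) from the measured gap (m) and the
    CAM-derived leader speed (m/s)."""
    desired_gap = standstill_gap + time_gap * ego_speed
    gap_error = gap - desired_gap          # positive: too far behind
    speed_error = leader_speed - ego_speed  # positive: leader is faster
    return k_gap * gap_error + k_speed * speed_error
```

At equilibrium (gap equals the desired gap, equal speeds) the commanded acceleration is zero; a large gap yields acceleration and a too-small gap behind a slower leader yields braking, matching the "accelerate, then gently slow down" behavior described above.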

5.3. Collision Avoidance and Redundancy Measurements in Urban Scenario Using Various Sensors

In this use case, we developed a scenario based on the car2car grid configuration set provided by the Artery framework [4]. The scenario contains a grid of 5 × 5 perpendicular roads modeling an urban environment, as depicted in Figure 7. The scenario ran for 400 s after a 100 s warm-up period. The penetration value was set to 20%, 50%, and 80% to keep the similarity with the previous experiments.
In the scope of this measurement, we focused on the effects of the various V2X services on the number of object detections on the receiver vehicle side. We applied three different sensor setups; the number of sensor-equipped vehicles depended on the V2X penetration parameter. In the first setup, only a front sensor was applied to the equipped vehicles, with a range of 80 m and a field of view of 60°. In the second setup, the vehicles were also equipped with CAM-sending capabilities and CAM sensors; the CAM sensors added the received CAMs to the object model. The third setup additionally included the CPM service and CPM sensors; the CPMs carried the radar detections of the surrounding vehicles. In this scenario, we measured the level of redundancy and the overall number of detections. The number of vehicles in the simulation is presented in Figure 8.
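The front-sensor detection test (80 m range, 60° field of view) amounts to a simple range-and-cone check. The sketch below is a simplified 2D illustration with hypothetical names, not Artery's actual sensor model.

```python
# Sketch of the front-sensor detection test described above: a target is
# detected if it is within range and inside the symmetric FOV cone around
# the ego heading. Simplified 2D geometry; names are hypothetical.

import math

def radar_detects(ego_pos, ego_heading_deg, target_pos,
                  sensor_range=80.0, fov_deg=60.0):
    """Headings/angles in degrees, positions in metres (x, y) tuples."""
    dx = target_pos[0] - ego_pos[0]
    dy = target_pos[1] - ego_pos[1]
    distance = math.hypot(dx, dy)
    if distance > sensor_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # wrap the off-axis angle into (-180, 180]
    off_axis = (bearing - ego_heading_deg + 180.0) % 360.0 - 180.0
    return abs(off_axis) <= fov_deg / 2.0
```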

6. Showcase Results

6.1. Parameters

As part of the initial evaluation, we ran the two example applications of our ADAS framework in an urban and a highway scenario, both generated based on real-world maps. The most important free parameter was the V2X penetration, i.e., the ratio of equipped vehicles in the simulated traffic. The measurements were performed with 0%, 20%, 50%, and 80% penetration. The simulations normally lasted 500 s, but were limited to 200 s in the 80% penetration cases due to feasibility constraints. The highway scenario covered a 10 km long highway section and contained around 350 vehicles during the simulation. The urban scenario covered an approximately 5 km² urban area.
The filter and effect components of the applications were introduced in Figure 3 and Figure 5. The Platooning application engaged if the speed of the ego vehicle was between 7 and 80 m s−1, if there was a remote vehicle with a similar heading (deviating by no more than 15°), and if that remote vehicle was in front of the ego vehicle, practically within a 4 m × 120 m bounding box. The Road Works Warning application engaged if the ego vehicle's speed was between 3 m s−1 and 80 m s−1 and the remote vehicle was in front of the ego vehicle (within a 100 m × 200 m bounding box).
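The platooning engagement conditions quoted above can be checked with a small worked example (the helper function is illustrative; angles in degrees, speeds in m/s, positions in metres relative to the ego vehicle, x lateral and y longitudinal by assumption).

```python
# Worked check of the platooning engagement conditions: ego speed in
# [7, 80] m/s, relative heading within +/-15 degrees, and the remote
# vehicle inside the 4 m x 120 m box ahead of the ego vehicle
# (cf. the InRelativeBoundingBox configuration in Appendix B).
# The helper name and coordinate convention are illustrative.

def platooning_engages(ego_speed, rel_heading_deg, rel_x, rel_y):
    speed_ok = 7.0 <= ego_speed <= 80.0
    heading_ok = -15.0 <= rel_heading_deg <= 15.0
    in_box = -2.0 <= rel_x <= 2.0 and 0.0 <= rel_y <= 120.0
    return speed_ok and heading_ok and in_box
```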
In order to showcase the viability and performance of our system and how the applications' performance can be assessed, we collected the three most interesting diagrams from the simulation results.

6.2. Applications on a Highway

In Figure 9, we can see the effect of the platooning application on the environmental parameters in the highway scenario. The most interesting aspect is fuel consumption, which does not always drop as the penetration increases, as we would expect. This effect has two reasons. The first is that SUMO does not model the reduced air drag of a vehicle following a preceding vehicle, which decreases the achievable benefits of the platooning application. The second reason is a limitation of the initial application model: when a vehicle detects a platoon leader, it immediately accelerates, and these accelerations increase fuel consumption. Once the number of equipped vehicles reaches a threshold, the platoons decrease fuel consumption due to their smoothing effect on the traffic.
In Figure 10, we can see how the standard deviation of the speed develops in the highway scenario depending on the number of vehicles equipped with the Road Works Warning application. The standard deviation of the speed is calculated every second from the speeds of the vehicles, and the average of these values is taken. As we can see, the standard deviation drastically decreases after a short, moderate increase. This means that deploying even this relatively simple application version will likely increase traffic safety and make the traffic more fluent. If the application is not deployed, the vehicles form a queue behind the road works, resulting in significant speed differences among the vehicles. The slight increase in the beginning is due to the advantage of the equipped vehicles: they can perform the lane change early and overtake the standing queue, so the scenario can be considered unfair. We could also observe this in the user interface of the simulation; however, the diagrams helped us quantify the results.
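The metric described above can be sketched in a few lines (illustrative helper, using the population standard deviation for simplicity): compute the per-second standard deviation of vehicle speeds, then average over the simulation.

```python
# Sketch of the speed-variability metric described above: per-second
# standard deviation of vehicle speeds, averaged over the simulation.
# Helper name and the choice of population std deviation are assumptions.

import statistics

def mean_speed_stddev(speeds_per_second):
    """speeds_per_second: list of per-second lists of vehicle speeds (m/s)."""
    per_second = [statistics.pstdev(speeds)
                  for speeds in speeds_per_second if speeds]
    return sum(per_second) / len(per_second)
```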

6.3. Applications in Urban Scenario

In Figure 11, we can see the effect of the Road Works Warning application on the average speed in the urban environment. This diagram shows a similar pattern to Figure 10. Due to the early lane changes of the equipped vehicles, an unfair situation developed, in which the equipped vehicles practically blocked the lane changes of the other road users. Once the penetration reached a certain threshold, more vehicles changed to the fast lane early, which slowed the fast lane down and gave the non-equipped vehicles a chance to change lanes. In practice, this behavior resulted in a higher average speed in the simulated scenarios.

6.4. Redundancy in the Grid-Like Urban Scenario

In this scenario, we focused on the detection redundancy achievable via the V2X services. The interpretation of the redundancy score is explained in Figure 2. The measurement results for the various sensor setups and penetrations are depicted in Figure 12. This diagram represents the sensor setups with different colors and the penetration values with different positions on the penetration axis. As we can see, the radar alone detects only a few vehicles compared to the V2X-extended perception cases, and its detections grow gradually with the ratio of equipped vehicles. In the second sensor setup, the number of detections is much larger than in the radar-only case; we can also observe a small amount of redundant object detections, but the degree of redundancy does not exceed 2. In the combined radar, CAM, and CPM setup, the number of detections is even higher than in the CAM case. In rare situations, the redundancy can reach 11, which means that the receiver vehicle obtained information about an object from 11 different observations. The gap between the CAM and CPM redundancy results is as large as the gap between those and the radar-only case. The number of objects in the CPMs also depends on the applied redundancy mitigation techniques, and the gap between the CAM and CPM results would likely be even greater in denser traffic.
In Figure 13, the total number of object detections is presented. This diagram marks the sensor deployment schemes with different colors, and the groups on the horizontal axis represent the different penetration values. Generally, the number of detected objects grows with the number of equipped vehicles. The number of objects detected via CPMs usually exceeds the number detected via CAMs; at 20% penetration, however, the CAM case exceeds the CPM case, which is likely due to an unfortunate random assignment of the equipped vehicles.

7. Conclusions

With the evolution of vehicle technology and the appearance of advanced C-ITS architectures, the importance of modeling and evaluating V2X-based ADAS applications has grown. The ADAS application framework presented in this paper provides a powerful yet straightforward environment for application testing, relying on the accurate network and protocol models of Artery/OMNeT++. The showcase results demonstrate that the system has excellent potential for evaluating advanced V2X-based ADAS use cases. Thanks to the declarative programming interface and the flexible architecture, a wide range of ADAS solutions can be modeled in the framework, in which the effects of ADAS actions can be characterized at various abstraction levels and scopes, the related perception and V2X communication indicators can be collected, and the findings can be quantified in the form of statistically relevant numerical results. The flexibility of the framework makes the simulation of V2V (e.g., emergency vehicle warning, intersection collision warning) and I2V (e.g., roadworks warning, signal violation warning) safety application scenarios possible. With further enhancements and extensions of the framework, autonomous driving use cases could be fully supported as well.
During the testing of the framework, a platooning and a road works warning use case were implemented, and multiple key performance indicators were evaluated as functions of various V2X penetration levels. The sample applications showed that novel applications, such as emergency braking, DENM generation, data aggregation, or maneuvering, can conveniently be implemented and examined using the ADASApp features.
As part of our future work, we plan to build even more realistic sensor and environment models for V2X-supported ADAS mechanisms by integrating Artery with CARLA (the “open-source simulator for autonomous driving”) so that the ADASApp-specified use cases could be examined based on practical sensor data processing pipelines with accurate object detection, decision, and fusion algorithms.

Author Contributions

Conceptualization, A.W. and L.B.; methodology, A.W. and L.B.; software, A.W.; validation, A.W.; original draft preparation, A.W., A.E. and L.B.; review and editing, A.W., A.E. and L.B.; supervision, A.E. and L.B.; project administration, L.B.; funding acquisition, L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Cooperative Doctoral Program of the National Research, Development and Innovation Fund of Hungary (NVKDP-2021) and by the European Union project RRF-2.3.1-21-2022-00004 within the framework of the Hungarian Artificial Intelligence National Laboratory.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their sincere thanks to András Váradi at Commsignia for his continuous support, useful discussions and valuable comments. We would like to thank Commsignia for their support in the funding and their expertise.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D      Three-Dimensional
ABS     Anti-lock Braking Systems
AD      Autonomous Driving
ADAS    Advanced Driver Assistance Systems
AGLOSA  Automated Green Light Optimized Speed Advisory
C-ACC   Cooperative Adaptive Cruise Control
CAM     Cooperative Awareness Message
CAV     Cooperative Automated Vehicle
C-ITS   Cooperative Intelligent Transport Systems
CPM     Collective Perception Message
DENM    Decentralized Environmental Notification Message
DSRC    Dedicated Short-Range Communications
ECU     Electronic Control Unit
GNSS    Global Navigation Satellite System
HiL     Hardware-in-the-Loop
ODD     Operational Design Domain
RWW     Roadworks Warning
SUMO    Simulation of Urban MObility
V2X     Vehicle-to-Everything
VANET   Vehicular Ad Hoc Network
XML     eXtensible Markup Language

Appendix A. RWW Application Configuration

<?xml version="1.0" encoding="UTF-8"?>
<adasapps>
    <adasapp>
        <filter type="And">
            <filter type="SelfSpeed">
                <minSpeed>7</minSpeed>
                <maxSpeed>80</maxSpeed>
            </filter>
            <filter type="RelativeHeading">
                <fromAngle>-15</fromAngle>
                <toAngle>15</toAngle>
            </filter>
            <filter type="RWWAheadFilter">
                <pos><x>-800</x><y>0</y></pos>
                <pos><x>-800</x><y>2400</y></pos>
                <pos><x>800</x><y>2400</y></pos>
                <pos><x>800</x><y>0</y></pos>
            </filter>
        </filter>
        <effect type="LaneChange"/>
    </adasapp>
</adasapps>

Appendix B. Platooning Application Configuration

<?xml version="1.0" encoding="UTF-8"?>
<adasapps>
    <adasapp>
        <filter type="And">
            <filter type="SelfSpeed">
                <minSpeed>7</minSpeed>
                <maxSpeed>80</maxSpeed>
            </filter>
            <filter type="RelativeHeading">
                <fromAngle>-15</fromAngle>
                <toAngle>15</toAngle>
            </filter>
            <filter
                type="InRelativeBoundingBox">
                <pos><x>-2</x><y>0</y></pos>
                <pos><x>-2</x><y>120</y></pos>
                <pos><x>2</x><y>120</y></pos>
                <pos><x>2</x><y>0</y></pos>
            </filter>
        </filter>
        <effect type="Platooning"/>
    </adasapp>
</adasapps>

References

  1. Jumaa, B.A.; Abdulhassan, A.M.; Abdulhassan, A.M. Advanced driver assistance system (ADAS): A review of systems and technologies. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 2019, 8, 231–234. [Google Scholar]
  2. Masini, B.M.; Zanella, A.; Pasolini, G.; Bazzi, A.; Zabini, F.; Andrisano, O.; Mirabella, M.; Toppan, P. Toward the Integration of ADAS Capabilities in V2X Communications for Cooperative Driving. In Proceedings of the 2020 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), Turin, Italy, 18–20 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
  3. Wang, J.; Shao, Y.; Ge, Y.; Yu, R. A Survey of Vehicle to Everything (V2X) Testing. Sensors 2019, 19, 334. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Riebl, R.; Obermaier, C.; Günther, H.J. Artery: Large Scale Simulation Environment for ITS Applications. In Recent Advances in Network Simulation: The OMNeT++ Environment and Its Ecosystem; Springer International Publishing: Cham, Switzerland, 2019; pp. 365–406. [Google Scholar] [CrossRef]
  5. Artery. V2X Simulation Framework. Available online: http://artery.v2x-research.eu/ (accessed on 3 November 2022).
  6. Varga, A. OMNeT++. In Modeling and Tools for Network Simulation; Wehrle, K., Güneş, M., Gross, J., Eds.; Springer: Berlin/Heidelberg, 2010; pp. 35–59. [Google Scholar] [CrossRef]
  7. INET. An Open-Source OMNeT++ Model Suite. Available online: https://inet.omnetpp.org/ (accessed on 3 November 2022).
  8. Riebl, R.; Obermaier, C.; Neumeier, S.; Facchi, C. Vanetza: Boosting research on inter-vehicle communication. In Proceedings of the 5th GI/ITG KuVS Fachgespräch Inter-Vehicle Communication (FG-IVC 2017), Erlangen, Germany, 6–7 April 2017; pp. 37–40. [Google Scholar]
  9. Lopez, P.A.; Behrisch, M.; Bieker-Walz, L.; Erdmann, J.; Flötteröd, Y.P.; Hilbrich, R.; Lücken, L.; Rummel, J.; Wagner, P.; Wießner, E. Microscopic Traffic Simulation using SUMO. In Proceedings of the the 21st IEEE International Conference on Intelligent Transportation Systems, Maui, HI, USA, 4–7 November 2018. [Google Scholar]
  10. OpenStreetMap. The project that creates and distributes free geographic data for the world. Available online: https://www.openstreetmap.org (accessed on 3 November 2022).
  11. Herbert, M.; Váradi, A.; Bokor, L. Modelling and Examination of Collective Perception Service for V2X Supported Autonomous Driving. In Proceedings of the 11th International Conference on Applied Informatics (ICAI 2020), Eger, Hungary, 29–31 January 2020; pp. 138–149. [Google Scholar]
  12. Pathrose, P. ADAS and Automated Driving: A Practical Approach to Verification and Validation; SAE International: Warrendale, PA, USA, 2022. [Google Scholar]
  13. C2C-CC. Guidance for Day 2 and Beyond Roadmap, V1.2; CAR 2 CAR Communication Consortium: Braunschweig, Germany, 2021. [Google Scholar]
  14. ETSI EN 302 665 V1.1.1 (2010-09); Intelligent Transport Systems (ITS); Communications Architecture. ETSI: Sophia Antipolis, France, 2010.
  15. ETSI EN 302 637-2 V1.4.1, (2019-04); European Standard, Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service. ETSI: Sophia Antipolis, France, 2019.
  16. ETSI EN 302 637-3 V1.3.1 (2019-04); Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 3: Specifications of Decentralized Environmental Notification Basic Service. ETSI: Sophia Antipolis, France, 2019.
  17. ETSI TR 103 562 V2.1.1 (2019-12); Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Analysis of the Collective Perception Service (CPS); Release 2. ETSI: Sophia Antipolis, France, 2019.
  18. Martinez, F.J.; Toh, C.K.; Cano, J.C.; Calafate, C.T.; Manzoni, P. A survey and comparative study of simulators for vehicular ad hoc networks (VANETs). Wirel. Commun. Mob. Comput. 2011, 11, 813–828. [Google Scholar] [CrossRef]
  19. Sommer, C.; Härri, J.; Hrizi, F.; Schünemann, B.; Dressler, F. Simulation Tools and Techniques for Vehicular Communications and Applications. In Vehicular Ad Hoc Networks: Standards, Solutions, and Research; Campolo, C., Molinaro, A., Scopigno, R., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 365–392. [Google Scholar] [CrossRef]
  20. Hejazi, H.; Bokor, L. A Survey on Simulation Efforts of 4G/LTE-based Cellular and Hybrid V2X Communications. In Proceedings of the 44th International Conference on Telecommunications and Signal Processing (TSP), Brno, Czech Republic, 26–28 July 2021; pp. 333–339. [Google Scholar] [CrossRef]
  21. Vernaza, A.; Ledezma, A.; Sanchis, A. Simul-A2: Agent-based simulator for evaluate ADA systems. In Proceedings of the 17th International Conference on Information Fusion (FUSION), Salamanca, Spain, 7–10 July 2014; pp. 1–7. [Google Scholar]
  22. Bücs, R.L.; Heistermann, M.; Leupers, R.; Ascheid, G. Multi-Scale Code Generation for Simulation-Driven Rapid ADAS Prototyping: The SMELT Approach. In Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; pp. 1–8. [Google Scholar] [CrossRef]
  23. Živkovic, U.; Đekić, O.; Lukač, Ž.; Milošević, M. HIL Based Solution for ADAS Software Development and Verification. In Proceedings of the IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), Berlin, Germany, 8–11 September 2019; pp. 396–399. [Google Scholar] [CrossRef]
  24. Stević, S.; Krunić, M.; Dragojević, M.; Kaprocki, N. Development and Validation of ADAS Perception Application in ROS Environment Integrated with CARLA Simulator. In Proceedings of the 27th Telecommunications Forum (TELFOR), Belgrade, Serbia, 26–27 November 2019; pp. 1–4. [Google Scholar] [CrossRef]
  25. Vector. CANoe.Car2x: Simulation, Development and Test of V2X-based Communication Applications. Available online: https://www.vector.com/int/en/products/products-a-z/software/canoe/option-car2x/ (accessed on 3 November 2022).
  26. Buttgereit, J.; Loffler, T. Vector Informatik GmbH Whitepaper: Testing V2X-Based Driver Assistance Systems. 2018. Available online: https://cdn.vector.com/cms/content/know-how/_technical-articles/Car2x_Testing_HanserAutomotive_201811_PressArticle_EN.pdf (accessed on 3 November 2022).
  27. ADAS iiT. Innovation in Test. Available online: https://www.sea-gmbh.com/en/v2x-solutions/v2x0/ (accessed on 3 November 2022).
  28. S.E.A. Datentechnik. V2X Test Systems. Available online: https://www.sea-gmbh.com/en/v2x-solutions/v2x100/ (accessed on 3 November 2022).
  29. dSPACE. V2X Interface for waveBEE. Available online: https://www.dspace.com/en/inc/home/products/sw/impsw/dspacev2xinterfacefor-waveb.cfm (accessed on 3 November 2022).
  30. Nordsys. The waveBEE V2X Product Family. Available online: https://www.keysight.com/us/en/products/wireless-network-emulators/wavebee-v2x-test-and-emulation.html (accessed on 3 November 2022).
  31. Hussein, A.; Díaz-Álvarez, A.; Armingol, J.M.; Olaverri-Monreal, C. 3DCoAutoSim: Simulator for Cooperative ADAS and Automated Vehicles. In Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3014–3019. [Google Scholar] [CrossRef]
  32. Michaeler, F.; Olaverri-Monreal, C. 3D driving simulator with VANET capabilities to assess cooperative systems: 3DSimVanet. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 999–1004. [Google Scholar] [CrossRef]
  33. Buse, D.S.; Sommer, C.; Dressler, F. Demo abstract: Integrating a driving simulator with city-scale VANET simulation for the development of next generation ADAS systems. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Honolulu, HI, USA, 15–19 April 2018; pp. 1–2. [Google Scholar] [CrossRef]
  34. Sommer, C.; German, R.; Dressler, F. Bidirectionally Coupled Network and Road Traffic Simulation for Improved IVC Analysis. IEEE Trans. Mob. Comput. (TMC) 2011, 10, 3–15. [Google Scholar] [CrossRef] [Green Version]
  35. Wippelhauser, A.; Bokor, L. A Declarative Rapid Prototyping ADAS Application Framework Based on the Artery Simulator. In Proceedings of the 45th International Conference on Telecommunications and Signal Processing (TSP), Virtual Conference, 13–15 July 2022; pp. 333–337. [Google Scholar] [CrossRef]
  36. Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Analysis of message generation rules for collective perception in connected and automated driving. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 134–139. [Google Scholar]
  37. Obermaier, C.; Riebl, R.; Facchi, C. Dynamic scenario control for VANET simulations. In Proceedings of the 2017 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Naples, Italy, 26–28 June 2017; pp. 681–686. [Google Scholar] [CrossRef]
  38. Bergenhem, C.; Shladover, S.; Coelingh, E.; Englund, C.; Tsugawa, S. Overview of platooning systems. In Proceedings of the 19th ITS World Congress, Vienna, Austria, 22–26 October 2012. [Google Scholar]
  39. Wippelhauser, A.; Bokor, L. Extensions and Usage of Veins/Plexe to Evaluate QoS Requirements of Cooperative Platooning. In Proceedings of the ICAI, Eger, Hungary, 29–31 January 2020; pp. 429–441. [Google Scholar]
Figure 1. Sequence diagram of general ADASApp operation.
Figure 2. Redundancy score explanation. The camera icons show distinct perceptions about a particular remote vehicle with different sensor sources.
Figure 3. Road works warning app configuration.
Figure 4. Urban traffic with platooning service use case.
Figure 5. Platooning app configuration.
Figure 6. Highway traffic with platooning service use case.
Figure 7. Urban grid scenario.
Figure 8. The number of vehicles in the simulation in the urban grid scenario.
Figure 9. Greenhouse gas emissions and fuel consumption in the highway traffic with platooning service use case.
Figure 10. The standard deviation of the speed on a highway with RWW deployed.
Figure 11. The average speed on a highway with RWW deployed.
Figure 12. The redundancy score of the different scenarios.
Figure 13. The total number of detected objects in the different scenarios.
Table 1. Comparison of application simulation frameworks in Artery.

|                         | Artery's Storyboard                         | Proposed Framework                                                  |
|-------------------------|---------------------------------------------|---------------------------------------------------------------------|
| Data source             | All SUMO entities                           | Based on the local environment model                                 |
| Sensor model            | All vehicles in the simulation are visible  | Based on V2X communication (CAM, DENM and CPM) and 2D sensor models  |
| Configuration interface | Python-based                                | XML-based                                                            |
| Main concept            | Filter-Effect concept                       | Filter-Effect concept                                                |
| Effect prioritization   | Based on effect stack                       | All effects are applied sequentially                                 |
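To make the Filter-Effect concept behind the XML-based configuration interface concrete, the sketch below parses a hypothetical application description into its filter and effect lists. The element and attribute names (`app`, `filter`, `effect`, `targetSpeed`, etc.) are illustrative assumptions only and do not reflect the framework's actual schema; the point is the structure: filters select the vehicles an application acts on, and effects are then applied to them sequentially.

```python
# Illustrative sketch of parsing a Filter-Effect style XML app
# configuration. All tag and attribute names are hypothetical,
# not the framework's real schema.
import xml.etree.ElementTree as ET

CONFIG = """
<app name="RoadWorksWarning">
  <filter type="messageType" value="DENM"/>
  <filter type="distance" max="300"/>
  <effect type="slowdown" targetSpeed="13.9"/>
  <effect type="laneChange" direction="left"/>
</app>
"""

def parse_app(xml_text):
    """Return (app name, filter attribute dicts, effect attribute dicts)."""
    root = ET.fromstring(xml_text)
    filters = [f.attrib for f in root.findall("filter")]
    # Effects carry no priority; in this model they run in document order.
    effects = [e.attrib for e in root.findall("effect")]
    return root.get("name"), filters, effects

name, filters, effects = parse_app(CONFIG)
```

A declarative description of this form can be validated and reloaded without recompiling the simulation, which is the practical advantage over Storyboard's Python-based configuration noted in Table 1.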
Wippelhauser, A.; Edelmayer, A.; Bokor, L. A Declarative Application Framework for Evaluating Advanced V2X-Based ADAS Solutions. Appl. Sci. 2023, 13, 1392. https://0-doi-org.brum.beds.ac.uk/10.3390/app13031392