
Intelligent Sensors for Smart and Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: 7 July 2024 | Viewed by 11719

Special Issue Editors


Prof. Dr. István Barabás
Guest Editor
Department of Automotive Engineering and Transports, Technical University of Cluj-Napoca, 400001 Cluj-Napoca, Romania
Interests: intelligent sensors; sensor fusion in automotive applications; automotive testing; powertrain concept; energy efficiency; autonomous vehicles; computer modeling and simulation in the automotive field

Dr. Calin Iclodean
Guest Editor
Department of Automotive Engineering and Transports, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: electric vehicles; fuel cell vehicles; powertrain concept; electronic control unit; in-vehicle communication network; energy efficiency; autonomous vehicles; computer modeling and simulation in the automotive field

Dr. Máté Zöldy
Guest Editor
Department of Automotive Technologies, BME Faculty of Transportation Engineering and Vehicle Engineering, 1111 Budapest, Hungary
Interests: legislation of autonomous vehicles; trajectory planning in roundabouts; refueling prediction of autonomous vehicles; drivetrains

Special Issue Information

Dear Colleagues,

Autonomous vehicles (AVs) must be intelligent. Beyond executing the functional algorithms programmed into the autonomous driving system, they must decide for themselves, at any moment, the vehicle's next actions, including in safety-critical situations. To make the decisions that will take AVs toward levels 4 and 5 of driving automation (as defined in SAE J3016), these vehicles must have an intelligent perception system: localization, perception, prediction, and planning of the vehicle's current and future actions all rely on data harvested from the environment by intelligent sensors. Up-to-date information on the capabilities and limits, advantages, and disadvantages of intelligent sensors for AVs would therefore benefit the entire community involved in AV development, helping to make AVs more reliable, safer, and more robust.

In this Special Issue on “Intelligent Sensors for Smart and Autonomous Vehicles”, authors can contribute to the development of AVs by publishing Open Access papers on intelligent sensorial systems for AVs (RADAR, LiDAR, cameras, ultrasonic sensors, GPS/GNSS, V2V, etc.), fusion algorithms for the data harvested by intelligent sensors, object classification techniques and mechanisms, and related subjects. We invite authors to contribute research results related, but not limited, to the following topics:

  • intelligent sensorial systems for autonomous driving;
  • paradigms, concepts, and architectures for intelligent sensorial systems;
  • real models vs. virtual models for intelligent sensorial systems;
  • sensor integration and fusion for autonomous driving;
  • intelligent proprioceptive and exteroceptive sensors;
  • object classification by intelligent sensors;
  • artificial intelligence algorithms;
  • fog and edge computing for autonomous driving;
  • cybersecurity for intelligent sensorial systems;
  • applications based on intelligent sensorial systems;
  • the role of intelligent sensors in V2X communication.

Prof. Dr. István Barabás
Dr. Calin Iclodean
Dr. Máté Zöldy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent sensors
  • optimized sensors
  • sensor fusion
  • sensor integration
  • imaging sensors
  • range sensors
  • inertial sensors
  • autonomous driving
  • artificial intelligence
  • machine learning
  • deep learning
  • big data processing
  • virtual reality
  • cloud computing
  • edge computing
  • fog computing

Published Papers (8 papers)


Research


18 pages, 7347 KiB  
Article
Driver Drowsiness Multi-Method Detection for Vehicles with Autonomous Driving Functions
by Horia Beles, Tiberiu Vesselenyi, Alexandru Rus, Tudor Mitran, Florin Bogdan Scurt and Bogdan Adrian Tolea
Sensors 2024, 24(5), 1541; https://doi.org/10.3390/s24051541 - 28 Feb 2024
Viewed by 743
Abstract
The article outlines various approaches to developing a fuzzy decision algorithm designed for monitoring and issuing warnings about driver drowsiness. This algorithm is based on analyzing EOG (electrooculography) signals and eye state images with the aim of preventing accidents. The drowsiness warning system comprises key components that learn about, analyze and make decisions regarding the driver’s alertness status. The outcomes of this analysis can then trigger warnings if the driver is identified as being in a drowsy state. Driver drowsiness is characterized by a gradual decline in attention to the road and traffic, diminishing driving skills and an increase in reaction time, all contributing to a higher risk of accidents. In cases where the driver does not respond to the warnings, the ADAS (advanced driver assistance system) should intervene, assuming control of the vehicle’s commands.
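To make the decision stage concrete, here is a minimal fuzzy-inference sketch in Python. The membership function shapes, the PERCLOS-style eye-closure ratio, and the blink-duration input are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a fuzzy drowsiness decision (assumed inputs and thresholds).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def drowsiness_level(perclos, blink_duration_s):
    """Fuse eye-closure ratio (0..1) and mean blink duration (s) into a
    drowsiness score in [0, 1] via max-min fuzzy inference."""
    alert  = min(tri(perclos, -0.10, 0.00, 0.15), tri(blink_duration_s, 0.0, 0.1, 0.3))
    drowsy = min(tri(perclos,  0.10, 0.30, 0.50), tri(blink_duration_s, 0.2, 0.4, 0.6))
    asleep = min(tri(perclos,  0.40, 1.00, 1.60), tri(blink_duration_s, 0.5, 1.0, 2.0))
    total = alert + drowsy + asleep or 1e-9          # avoid division by zero
    # Weighted defuzzification: 0 = alert, 0.5 = drowsy, 1 = asleep.
    return (0.0 * alert + 0.5 * drowsy + 1.0 * asleep) / total

print(drowsiness_level(perclos=0.35, blink_duration_s=0.5))  # 0.5 -> issue warning
```

In a real system, the two inputs would be derived from the EOG signal and eye-state image pipeline described in the paper, with the warning and ADAS hand-over logic sitting on top of this score.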

20 pages, 14985 KiB  
Article
Point Cloud Painting for 3D Object Detection with Camera and Automotive 3+1D RADAR Fusion
by Santiago Montiel-Marín, Ángel Llamazares, Miguel Antunes, Pedro A. Revenga and Luis M. Bergasa
Sensors 2024, 24(4), 1244; https://doi.org/10.3390/s24041244 - 15 Feb 2024
Viewed by 772
Abstract
RADARs and cameras have been present in automobiles since the advent of ADAS; they possess complementary strengths and weaknesses but have been overlooked in the context of learning-based methods. In this work, we propose a method to perform object detection in autonomous driving based on a geometrical and sequential sensor fusion of 3+1D RADAR and semantics extracted from camera data through point cloud painting from the perspective view. To achieve this objective, we adapt PointPainting from the LiDAR and camera domains to the sensors mentioned above. We first apply YOLOv8-seg to obtain instance segmentation masks and project their results to the point cloud. As a refinement stage, we design a set of heuristic rules to minimize the propagation of errors from the segmentation to the detection stage. Our pipeline concludes by applying PointPillars as an object detection network to the painted RADAR point cloud. We validate our approach on the novel View-of-Delft dataset, which includes 3+1D RADAR data sequences in urban environments. Experimental results show that this fusion is also suitable for RADAR and cameras, as we obtain a significant improvement over the RADAR-only baseline, increasing mAP from 41.18 to 52.67 (+27.9%).
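The painting step itself is compact enough to sketch. The array shapes, the pinhole projection, and the function name below are assumptions made for illustration; the paper's pipeline additionally applies heuristic refinement rules between segmentation and detection:

```python
import numpy as np

def paint_points(points_xyz, K, seg_scores):
    """Append per-class image scores to each RADAR point (PointPainting-style).

    points_xyz: (N, 3) RADAR points, assumed already in the camera frame.
    K:          (3, 3) pinhole camera intrinsics.
    seg_scores: (C, H, W) per-class scores from a segmentation network.
    """
    C, H, W = seg_scores.shape
    uvw = points_xyz @ K.T                         # homogeneous pixel coordinates
    z = np.clip(uvw[:, 2], 1e-6, None)             # guard against division by zero
    u = (uvw[:, 0] / z).astype(int)
    v = (uvw[:, 1] / z).astype(int)
    inside = (uvw[:, 2] > 1e-3) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    painted = np.zeros((points_xyz.shape[0], 3 + C), dtype=np.float32)
    painted[:, :3] = points_xyz                    # keep geometry
    painted[inside, 3:] = seg_scores[:, v[inside], u[inside]].T  # add semantics
    return painted                                 # input for a detector such as PointPillars
```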

28 pages, 35421 KiB  
Article
YOLOv7-TS: A Traffic Sign Detection Model Based on Sub-Pixel Convolution and Feature Fusion
by Shan Zhao, Yang Yuan, Xuan Wu, Yunlei Wang and Fukai Zhang
Sensors 2024, 24(3), 989; https://doi.org/10.3390/s24030989 - 03 Feb 2024
Cited by 1 | Viewed by 945
Abstract
In recent years, significant progress has been witnessed in the field of deep learning-based object detection. As a subtask of object detection, traffic sign detection has great potential for development. However, existing object detection methods for traffic sign detection in real-world scenes are plagued by issues such as the omission of small objects and low detection accuracy. To address these issues, a traffic sign detection model named YOLOv7-Traffic Sign (YOLOv7-TS) is proposed based on sub-pixel convolution and feature fusion. Firstly, the up-sampling capability of sub-pixel convolution, which integrates the channel dimension, is harnessed, and a Feature Map Extraction Module (FMEM) is devised to mitigate channel information loss. Furthermore, a Multi-feature Interactive Fusion Network (MIFNet) is constructed to facilitate enhanced information interaction among all feature layers, improving feature fusion effectiveness and strengthening the perception of small objects. Moreover, a Deep Feature Enhancement Module (DFEM) is established to accelerate the pooling process while enriching the highest-layer features. YOLOv7-TS is evaluated on two traffic sign datasets, CCTSDB2021 and TT100K. Compared with YOLOv7, YOLOv7-TS, with fewer parameters, achieves a significant enhancement of 3.63% and 2.68% in mean Average Precision (mAP) on the respective datasets, proving the effectiveness of the proposed model.
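The core of the up-sampling idea is the pixel-shuffle rearrangement behind sub-pixel convolution: channel information is traded for spatial resolution. A minimal NumPy sketch of that rearrangement, independent of the paper's actual modules:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (c, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.random.rand(64 * 4, 20, 20)   # e.g. 256 channels, upscale factor r = 2
print(pixel_shuffle(feat, 2).shape)     # (64, 40, 40)
```

In YOLOv7-TS this mechanism would be preceded by a convolution that prepares the C*r^2 channels; that convolution and the FMEM/MIFNet/DFEM modules are not reproduced here.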

16 pages, 6187 KiB  
Article
Identification of Driver Status Hazard Level and the System
by Jiayuan Gong, Shiwei Zhou and Wenbo Ren
Sensors 2023, 23(17), 7536; https://doi.org/10.3390/s23177536 - 30 Aug 2023
Viewed by 685
Abstract
According to survey statistics, most traffic accidents are caused by irregularities in the driver’s behavior and status. Because no multi-level dangerous-state grading system yet exists, either domestically or internationally, this paper proposes a complex state grading system for real-time detection and dynamic tracking of the driver’s state. The system uses an OpenMV camera combined with a cradle-head tracking system to dynamically collect the driver’s current driving image in real time, combines the YOLOX algorithm with the OpenPose algorithm to judge dangerous driving behavior by detecting unsafe objects in the cab and the driver’s posture, and combines an improved Retinaface face detection algorithm with the Dlib feature-point algorithm to discriminate the driver’s fatigued driving state. The experimental results show that the accuracy of the three driver danger levels (R1, R2, and R3) obtained by the proposed system reaches 95.8%, 94.5%, and 96.3%, respectively. These results give the system practical significance for distracted-driving warnings.
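The abstract does not disclose the grading criteria, so the following sketch only illustrates the shape of such a multi-level fusion; the rule table mapping detector outputs to R1–R3 is invented:

```python
# Hypothetical fusion of the three detector outputs into a danger level.
def hazard_level(unsafe_object: bool, bad_posture: bool, fatigued: bool) -> str:
    """Map per-frame detections (e.g. YOLOX objects, OpenPose posture,
    Retinaface/Dlib fatigue cues) to a discrete danger level."""
    score = sum((unsafe_object, bad_posture, fatigued))
    return {0: "safe", 1: "R1", 2: "R2", 3: "R3"}[score]

print(hazard_level(unsafe_object=True, bad_posture=False, fatigued=True))  # R2
```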

15 pages, 9619 KiB  
Article
Cyclist Orientation Estimation Using LiDAR Data
by Hyoungwon Chang, Yanlei Gu, Igor Goncharenko, Li-Ta Hsu and Chinthaka Premachandra
Sensors 2023, 23(6), 3096; https://doi.org/10.3390/s23063096 - 14 Mar 2023
Viewed by 1745
Abstract
It is crucial for an autonomous vehicle to predict cyclist behavior before decision-making. When a cyclist is on real traffic roads, his or her body orientation indicates the current direction of movement, and his or her head orientation indicates an intention to check the road situation before the next movement. Therefore, estimating the orientation of the cyclist’s body and head is an important factor in cyclist behavior prediction for autonomous driving. This research proposes estimating cyclist orientation, including both body and head orientation, using a deep neural network with data from a Light Detection and Ranging (LiDAR) sensor. Two different methods are proposed for cyclist orientation estimation. The first method uses 2D images to represent the reflectivity, ambient, and range information collected by the LiDAR sensor. The second method uses 3D point cloud data to represent the information collected by the LiDAR sensor. Both methods adopt ResNet50, a 50-layer convolutional neural network, for orientation classification, and their performances are compared to determine the most effective use of LiDAR sensor data for cyclist orientation estimation. This research developed a cyclist dataset that includes multiple cyclists with different body and head orientations. The experimental results showed that the model using 3D point cloud data performs better for cyclist orientation estimation than the model using 2D images. Moreover, in the 3D point cloud data-based method, using reflectivity information yields a more accurate estimation than using ambient information.
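Treating orientation estimation as classification implies discretizing the angle into bins before training the ResNet50 head. A small sketch of that framing; the 8-bin layout is an assumption, as the abstract does not specify the class resolution:

```python
import numpy as np

N_BINS = 8  # assumed angular resolution (45 degrees per class)

def orientation_to_class(theta_rad: float) -> int:
    """Map a continuous body/head orientation angle to a discrete class."""
    theta = theta_rad % (2 * np.pi)            # normalize to [0, 2*pi)
    return int(theta // (2 * np.pi / N_BINS))

def class_to_orientation(cls: int) -> float:
    """Bin centre (radians) for a predicted class, for downstream prediction."""
    return (cls + 0.5) * 2 * np.pi / N_BINS

print(orientation_to_class(np.pi / 2))   # 2 in this layout
```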

21 pages, 1454 KiB  
Article
Design and Calibration of Plane Mirror Setups for Mobile Robots with a 2D-Lidar
by James E. Kibii, Andreas Dreher, Paul L. Wormser and Hartmut Gimpel
Sensors 2022, 22(20), 7830; https://doi.org/10.3390/s22207830 - 15 Oct 2022
Viewed by 1651
Abstract
Lidar sensors are widely used for environmental perception on autonomous robot vehicles (ARVs). The field of view (FOV) of Lidar sensors can be reshaped by positioning plane mirrors in their vicinity. Mirror setups can especially improve the FOV for ground detection of ARVs with 2D-Lidar sensors. This paper presents an overview of several geometric designs and their strengths for certain vehicle types. Additionally, a new and easy-to-implement calibration procedure for setups of 2D-Lidar sensors with mirrors is presented to determine precise mirror orientations and positions, using a single flat calibration object with a pre-aligned simple fiducial marker. Measurement data from a prototype vehicle with a 2 m range 2D-Lidar calibrated with this new procedure are presented. We show that the calibrated mirror orientations are accurate to less than 0.6° in this short range, a significant improvement over the orientation angles taken directly from the CAD model. The accuracy of the point cloud data improved, and no significant increase in distance noise was introduced. We deduced general guidelines for successful calibration setups using our method. In conclusion, a 2D-Lidar sensor and two plane mirrors calibrated with this method are a cost-effective and accurate way for robot engineers to improve the environmental perception of ARVs.
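The geometry underlying such mirror setups is the standard plane reflection: a point the Lidar measures "behind" the mirror is a virtual image and can be mapped back to its real position by reflecting it across the calibrated mirror plane. A minimal sketch, assuming the calibration yields a point on the mirror plane and its normal:

```python
import numpy as np

def reflect_points(points, plane_point, plane_normal):
    """Map virtual Lidar points seen via a plane mirror to real coordinates.

    points:       (N, 3) virtual points behind the mirror.
    plane_point:  a point on the mirror plane (from calibration).
    plane_normal: mirror plane normal (from calibration); any length.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n           # signed distance to the plane
    return points - 2.0 * d[:, None] * n     # reflect each point across the plane
```

Errors in the calibrated plane orientation rotate the reflected points about the mirror, which is why the sub-degree accuracy reported above matters for point cloud quality.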

19 pages, 1050 KiB  
Article
Quantifying the Foregone Benefits of Intelligent Speed Assist Due to the Limited Availability of Speed Signs across Three Australian States
by Sujanie Peiris, Stuart Newstead, Janneke Berecki-Gisolf and Brian Fildes
Sensors 2022, 22(20), 7765; https://doi.org/10.3390/s22207765 - 13 Oct 2022
Viewed by 1717
Abstract
By communicating the speed limit to drivers using speed sign recognition cameras, Intelligent Speed Assist (ISA) is expected to bring significant road safety gains through increased speed compliance. In the absence of complete digital speed maps, and given the limited cellular connectivity throughout Australia, this study estimated the foregone savings of ISA in the event that speed signs are relied upon solely for optimal advisory ISA function. First, speed-related fatalities and serious injuries (FSI) in the Australian states of Victoria, South Australia, and Queensland (2013–2018) were identified, and published effectiveness estimates of ISA were applied to determine its potential benefits. Subsequently, taking into account speed sign presence across the three states, the foregone savings of ISA were estimated as the FSI that would not be prevented due to absent speed signage. Annually, 27–35% of speed-related FSI in each state are unlikely to be prevented by ISA because speed sign infrastructure is absent, equating to economic losses of between AUD 62 and 153 million. Although a number of assumptions were made regarding ISA fitment and driver acceptance of the technology, conservative estimates suggest that the benefits of speed signs placed consistently across road classes and remoteness levels would far outweigh the costs expected from the absence of speed signs. The development and utilisation of a methodology for estimating the foregone benefits of ISA due to suboptimal road infrastructure constitutes a novel contribution to research. This work provides a means of identifying where infrastructure investments should be targeted to capitalise on the benefits offered by advanced driver assist technologies.
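The foregone-benefit logic reduces to simple arithmetic over FSI counts, ISA effectiveness, and sign coverage. All figures below are invented placeholders (the abstract reports outcomes, not these inputs), shown only to make the calculation concrete:

```python
# Hypothetical annual inputs for one state.
annual_speed_fsi = 1000        # speed-related fatalities and serious injuries
isa_effectiveness = 0.20       # assumed fraction of speed-related FSI ISA prevents
share_without_signs = 0.30     # FSI occurring where speed signs are absent
cost_per_fsi_aud = 500_000     # assumed unit cost of one FSI

preventable = annual_speed_fsi * isa_effectiveness    # ISA's full potential
foregone_fsi = preventable * share_without_signs      # lost to missing signs
foregone_cost_aud = foregone_fsi * cost_per_fsi_aud
print(f"Foregone savings: {foregone_fsi:.0f} FSI, AUD {foregone_cost_aud:,.0f}/year")
```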

Review


23 pages, 5569 KiB  
Review
Recent Developments on Drivable Area Estimation: A Survey and a Functional Analysis
by Juan Luis Hortelano, Jorge Villagrá, Jorge Godoy and Víctor Jiménez
Sensors 2023, 23(17), 7633; https://doi.org/10.3390/s23177633 - 03 Sep 2023
Viewed by 1421
Abstract
Most advanced autonomous driving systems (ADS) today rely on the prior creation of high-definition maps (HD maps). This process is expensive and needs to be performed frequently to keep up with the changing conditions of the road environment. Creating accurate navigation maps online is an alternative that reduces cost and broadens the current operational design domains (ODDs) of modern ADS. This paper offers a snapshot of the state of the art in drivable area estimation, an essential technology for deploying ADS in ODDs where HD maps are limited or unavailable. The proposed review introduces a novel architecture breakdown that fits both learning-based and non-learning-based techniques and allows the analysis of a set of impactful and recent drivable area algorithms. In addition, complementary information for practitioners is provided: (i) an assessment of the influence of modern sensing technologies on the task under study and (ii) a selection of relevant datasets for evaluation and benchmarking purposes.
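As a toy illustration of the non-learning-based end of the spectrum surveyed here, drivable area can be estimated by thresholding near-ground LiDAR returns into a bird's-eye-view grid. The resolution, extent, and height threshold below are arbitrary assumptions and do not reproduce any specific algorithm from the survey:

```python
import numpy as np

def drivable_grid(points, res=0.2, extent=20.0, max_ground_h=0.15):
    """Mark grid cells containing near-ground returns as drivable.

    points: (N, 3) LiDAR points in the vehicle frame (z up, assumed).
    Returns a (size, size) boolean bird's-eye-view occupancy grid.
    """
    size = int(2 * extent / res)
    grid = np.zeros((size, size), dtype=bool)
    ground = points[np.abs(points[:, 2]) < max_ground_h]   # near-ground returns
    ij = ((ground[:, :2] + extent) / res).astype(int)      # metric -> cell index
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 1], ij[ok, 0]] = True                      # rows = y, cols = x
    return grid
```

A real system would add free-space ray-tracing, obstacle handling, and temporal filtering, or replace the heuristic entirely with a learned segmentation head.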
