Application of Deep Learning in Intelligent Transportation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 61227

Special Issue Editors


Prof. Dr. Dieter Schramm
Guest Editor
Department of Mechanical Engineering, University of Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany
Interests: Vehicle dynamics; driver assistance systems; vehicle simulators

Dr. Philipp Sieberg
Guest Editor
Faculty of Engineering, University of Duisburg-Essen, 47057 Duisburg, Germany
Interests: Intelligent Transportation Systems (ITS); Vehicle Dynamics; Machine Learning; Hybrid Methods and Approaches; Applied Artificial Intelligence

Special Issue Information

Dear Colleagues,

For some time now, deep learning has been a promising approach for the study of various systems, and this is also the case in the field of intelligent transportation systems. Due to their multi-layered structure, deep networks outperform classical machine learning approaches in processing data and learning correlations, an advantage that carries over directly to intelligent transportation systems. Applications include navigation and localization, signal and image processing, connected and automated vehicles, as well as virtual sensors, among many others.

This Special Issue encourages authors from academia and industry to submit new research results about technological innovations and novel ideas for the application of deep learning in intelligent transportation systems.

Prof. Dr. Dieter Schramm
Dr. Philipp Sieberg
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • advanced driver assistance systems
  • artificial neural networks
  • connected and automated vehicles
  • control systems
  • deep learning
  • intelligent transportation systems
  • navigation and localization
  • signal and image processing
  • virtual sensors

Published Papers (21 papers)


Research

25 pages, 2281 KiB  
Article
Transformers for Multi-Horizon Forecasting in an Industry 4.0 Use Case
by Stanislav Vakaruk, Amit Karamchandani, Jesús Enrique Sierra-García, Alberto Mozo, Sandra Gómez-Canaval and Antonio Pastor
Sensors 2023, 23(7), 3516; https://0-doi-org.brum.beds.ac.uk/10.3390/s23073516 - 27 Mar 2023
Cited by 4 | Viewed by 1713
Abstract
Recently, a novel approach in the field of Industry 4.0 factory operations was proposed for a new generation of automated guided vehicles (AGVs) that are connected to a virtualized programmable logic controller (PLC) via a 5G multi-access edge-computing (MEC) platform to enable remote control. However, this approach faces a critical challenge as the 5G network may encounter communication disruptions that can lead to AGV deviations and, with this, potential safety risks and workplace issues. To mitigate this problem, several works have proposed the use of fixed-horizon forecasting techniques based on deep-learning models that can anticipate AGV trajectory deviations and take corrective maneuvers accordingly. However, these methods have limited prediction flexibility for the AGV operator and are not robust against network instability. To address this limitation, this study proposes a novel approach based on multi-horizon forecasting techniques to predict the deviation of remotely controlled AGVs. As its primary contribution, the work presents two new versions of the state-of-the-art transformer architecture that are well-suited to the multi-horizon prediction problem. We conduct a comprehensive comparison between the proposed models and traditional deep-learning models, such as the long short-term memory (LSTM) neural network, to evaluate the performance and capabilities of the proposed models in relation to traditional deep-learning architectures. The results indicate that (i) the transformer-based models outperform LSTM in both multi-horizon and fixed-horizon scenarios, (ii) the prediction accuracy at a specific time-step of the best multi-horizon forecasting model is very close to that obtained by the best fixed-horizon forecasting model at the same step, (iii) models that use a time-sequence structure in their inputs tend to perform better in multi-horizon scenarios compared to their fixed horizon counterparts and other multi-horizon models that do not consider a time topology in their inputs, and (iv) our experiments showed that the proposed models can perform inference within the required time constraints for real-time decision making. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
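To make the multi-horizon idea above concrete, the following minimal sketch (not the authors' architecture; the layer sizes, the 8 input features and the 12-step horizon are placeholder assumptions) shows a transformer encoder that emits all future deviation steps in a single forward pass, in contrast to a fixed-horizon model that predicts only one step.

```python
# Illustrative sketch only: a generic multi-horizon forecaster built on a
# transformer encoder, not the architectures proposed in the paper.
# Feature count, layer sizes and horizon are hypothetical placeholders.
import torch
import torch.nn as nn

class MultiHorizonTransformer(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2, horizon=12):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)           # embed each time step
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, horizon)              # predict all horizons at once

    def forward(self, x):                                    # x: (batch, seq_len, n_features)
        h = self.encoder(self.proj(x))                       # (batch, seq_len, d_model)
        return self.head(h[:, -1])                           # (batch, horizon) deviations

model = MultiHorizonTransformer()
past = torch.randn(32, 50, 8)                                # 50 past steps of AGV telemetry
future_deviation = model(past)                               # one forward pass, 12 future steps
print(future_deviation.shape)                                # torch.Size([32, 12])
```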

15 pages, 5481 KiB  
Article
Road User Position and Speed Estimation via Deep Learning from Calibrated Fisheye Videos
by Yves Berviller, Masoomeh Shireen Ansarnia, Etienne Tisserand, Patrick Schweitzer and Alain Tremeau
Sensors 2023, 23(5), 2637; https://0-doi-org.brum.beds.ac.uk/10.3390/s23052637 - 27 Feb 2023
Viewed by 1832
Abstract
In this paper, we present a deep learning processing flow aimed at Advanced Driving Assistance Systems (ADASs) for urban road users. We use a fine analysis of the optical setup of a fisheye camera and present a detailed procedure to obtain Global Navigation Satellite System (GNSS) coordinates along with the speed of the moving objects. The camera to world transform incorporates the lens distortion function. YOLOv4, re-trained with ortho-photographic fisheye images, provides road user detection. All the information extracted from the image by our system represents a small payload and can easily be broadcast to the road users. The results show that our system is able to properly classify and localize the detected objects in real time, even in low-light-illumination conditions. For an effective observation area of 20 m × 50 m, the error of the localization is in the order of one meter. Although an estimation of the velocities of the detected objects is carried out by offline processing with the FlowNet2 algorithm, the accuracy is quite good, with an error below one meter per second for urban speed range (0 to 15 m/s). Moreover, the almost ortho-photographic configuration of the imaging system ensures that the anonymity of all street users is guaranteed. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
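A hedged sketch of the camera-to-world step described above: a fisheye pixel is undistorted to a normalized ray with OpenCV and intersected with the ground plane for a camera looking straight down, then offset from a reference GNSS point. The intrinsics, distortion coefficients, mounting height, axis orientation and reference coordinates are all invented placeholders, not the calibrated values from the paper.

```python
# Hedged sketch (not the paper's calibrated pipeline): project a fisheye pixel
# onto the ground plane for a camera looking straight down from height h, then
# offset a reference GNSS position. K, D, h and the reference point are made up.
import numpy as np
import cv2

K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 480.0],
              [0.0, 0.0, 1.0]])                              # intrinsics (placeholder)
D = np.array([0.05, -0.01, 0.002, -0.0005]).reshape(4, 1)    # fisheye coeffs (placeholder)
h = 6.0                                                      # camera height above road [m]
ref_lat, ref_lon = 48.6921, 6.1844                           # reference GNSS point (placeholder)

def pixel_to_gnss(u, v):
    pts = np.array([[[u, v]]], dtype=np.float64)
    xn, yn = cv2.fisheye.undistortPoints(pts, K, D)[0, 0]    # normalized ray (x/z, y/z)
    east, north = xn * h, -yn * h                            # assumed image-to-ENU orientation
    lat = ref_lat + north / 111_320.0                        # ~metres per degree latitude
    lon = ref_lon + east / (111_320.0 * np.cos(np.radians(ref_lat)))
    return lat, lon

print(pixel_to_gnss(900.0, 500.0))
```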

26 pages, 1447 KiB  
Article
FPGA-Based Vehicle Detection and Tracking Accelerator
by Jiaqi Zhai, Bin Li, Shunsen Lv and Qinglei Zhou
Sensors 2023, 23(4), 2208; https://0-doi-org.brum.beds.ac.uk/10.3390/s23042208 - 16 Feb 2023
Cited by 6 | Viewed by 3481
Abstract
A convolutional neural network-based multiobject detection and tracking algorithm can be applied to vehicle detection and traffic flow statistics, thus enabling smart transportation. Aiming at the problems of the high computational complexity of multiobject detection and tracking algorithms, a large number of model parameters, and difficulty in achieving high throughput with a low power consumption in edge devices, we design and implement a low-power, low-latency, high-precision, and configurable vehicle detector based on a field programmable gate array (FPGA) with YOLOv3 (You-Only-Look-Once-version3), YOLOv3-tiny CNNs (Convolutional Neural Networks), and the Deepsort algorithm. First, we use a dynamic threshold structured pruning method based on a scaling factor to significantly compress the detection model size on the premise that the accuracy does not decrease. Second, a dynamic 16-bit fixed-point quantization algorithm is used to quantify the network parameters to reduce the memory occupation of the network model. Furthermore, we generate a reidentification (RE-ID) dataset from the UA-DETRAC dataset and train the appearance feature extraction network on the Deepsort algorithm to improve the vehicles’ tracking performance. Finally, we implement hardware optimization techniques such as memory interlayer multiplexing, parameter rearrangement, ping-pong buffering, multichannel transfer, pipelining, Im2col+GEMM, and Winograd algorithms to improve resource utilization and computational efficiency. The experimental results demonstrate that the compressed YOLOv3 and YOLOv3-tiny network models decrease in size by 85.7% and 98.2%, respectively. The dual-module parallel acceleration meets the demand of the 6-way parallel video stream vehicle detection with the peak throughput at 168.72 fps. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
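As an illustration of one of the compression steps mentioned above, the sketch below shows a generic dynamic 16-bit fixed-point quantization: the number of fractional bits is derived per tensor from its dynamic range. It is a simplified stand-in, not the exact scheme implemented on the FPGA.

```python
# Minimal sketch of dynamic 16-bit fixed-point quantization (a generic version
# of the idea, not the paper's exact scheme): fractional bits are chosen per
# tensor from its dynamic range, then values are rounded and clipped to int16.
import numpy as np

def quantize_fixed16(w):
    max_abs = np.abs(w).max()
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))   # bits for the integer part
    frac_bits = 15 - int_bits                                    # remaining bits for the fraction
    q = np.clip(np.round(w * (1 << frac_bits)), -32768, 32767).astype(np.int16)
    return q, frac_bits

def dequantize(q, frac_bits):
    return q.astype(np.float32) / (1 << frac_bits)

w = np.random.randn(64, 3, 3, 3).astype(np.float32) * 0.2       # fake conv weights
q, fb = quantize_fixed16(w)
print(fb, np.abs(w - dequantize(q, fb)).max())                  # fractional bits, max error
```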

31 pages, 2918 KiB  
Article
Wireless Link Selection Methods for Maritime Communication Access Networks—A Deep Learning Approach
by Michal Hoeft, Krzysztof Gierlowski and Jozef Wozniak
Sensors 2023, 23(1), 400; https://0-doi-org.brum.beds.ac.uk/10.3390/s23010400 - 30 Dec 2022
Cited by 2 | Viewed by 1813
Abstract
In recent years, we have been witnessing a growing interest in the subject of communication at sea. One of the promising solutions to enable widespread access to data transmission capabilities in coastal waters is the possibility of employing an on-shore wireless access infrastructure. However, such an infrastructure is a heterogeneous one, managed by many independent operators and utilizing a number of different communication technologies. If a moving sea vessel is to maintain a reliable communication within such a system, it needs to employ a set of network mechanisms dedicated for this purpose. In this paper, we provide a short overview of such requirements and overall characteristics of maritime communication, but our main focus is on the link selection procedure—an element of critical importance for the process of changing the device/system which the mobile vessel uses to retain communication with on-shore networks. The paper presents the concept of employing deep neural networks for the purpose of link selection. The proposed methods have been verified using propagation models dedicated to realistically represent the environment of maritime communications and compared to a number of currently popular solutions. The results of evaluation indicate a significant gain in both accuracy of predictions and reduction of the amount of test traffic which needs to be generated for measurements. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
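For readers unfamiliar with the setting, a toy sketch of neural-network-based link selection is given below: each candidate on-shore link is described by a few measured features and a small network scores it, with the vessel switching to the highest-scoring link. The feature set and network sizes are assumptions for illustration only, not the paper's model.

```python
# Toy sketch of deep-learning-based link selection (not the paper's method):
# a small network scores each candidate on-shore link from measured features;
# the vessel then uses the highest-scoring link. Features/sizes are assumed.
import torch
import torch.nn as nn

class LinkScorer(nn.Module):
    def __init__(self, n_features=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1))                       # predicted link quality

    def forward(self, links):                       # links: (n_links, n_features)
        return self.net(links).squeeze(-1)

scorer = LinkScorer()                               # would be trained on measurement logs
# rows: [RSSI dBm, distance to shore station km, current load, technology id]
candidates = torch.tensor([[-72.0, 3.1, 0.4, 0.0],
                           [-85.0, 7.8, 0.1, 1.0],
                           [-60.0, 1.2, 0.9, 2.0]])
best = torch.argmax(scorer(candidates))             # index of the link to switch to
print(int(best))
```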

20 pages, 6824 KiB  
Article
Unusual Driver Behavior Detection in Videos Using Deep Learning Models
by Hamad Ali Abosaq, Muhammad Ramzan, Faisal Althobiani, Adnan Abid, Khalid Mahmood Aamir, Hesham Abdushkour, Muhammad Irfan, Mohammad E. Gommosani, Saleh Mohammed Ghonaim, V. R. Shamji and Saifur Rahman
Sensors 2023, 23(1), 311; https://0-doi-org.brum.beds.ac.uk/10.3390/s23010311 - 28 Dec 2022
Cited by 5 | Viewed by 3388
Abstract
Anomalous driving behavior detection is becoming more popular since it is vital in ensuring the safety of drivers and passengers in vehicles. Road accidents happen for various reasons, including health, mental stress, and fatigue. It is critical to monitor abnormal driving behaviors in real time to improve driving safety, raise driver awareness of their driving patterns, and minimize future road accidents. Many symptoms reveal this condition in the driver, such as facial expressions or abnormal actions. Abnormal activity is among the most common causes of road accidents, accounting for nearly 20% of all accidents according to international data on accident causes. To avoid serious consequences, abnormal driving behaviors must be identified and avoided. As it is difficult to monitor anyone continuously, automated detection of this condition is more effective and quicker. To increase drivers’ awareness of their driving behaviors and prevent potential accidents, a precise monitoring approach that detects and classifies abnormal driving behaviors is required. The most common activities performed by drivers while driving are drinking, eating, smoking, and calling. These types of driver activities are considered in this work, along with normal driving. This study proposes deep learning-based detection models for recognizing abnormal driver actions. The system is trained and tested using a newly created dataset comprising five classes: Driver-smoking, Driver-eating, Driver-drinking, Driver-calling, and Driver-normal. For the analysis of results, pre-trained and fine-tuned CNN models are considered: the proposed CNN-based model and the pre-trained models ResNet101, VGG-16, VGG-19, and Inception-v3. The results are compared using standard performance measures. The pre-trained models achieve accuracies of 89%, 93%, 93%, and 94%, while the proposed CNN-based model reaches 95%. Our analysis revealed that the proposed CNN-based model performed well and could effectively classify the driver’s abnormal behavior. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
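The transfer-learning baseline described above (a pre-trained CNN with a new five-class head) can be sketched as follows; the optimizer settings and the dummy batch are placeholders rather than the authors' training setup.

```python
# Sketch of a transfer-learning baseline for the five driver-behavior classes;
# hyperparameters and the data loader are placeholders, not the authors' setup.
# Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision import models

classes = ["Driver-smoking", "Driver-eating", "Driver-drinking",
           "Driver-calling", "Driver-normal"]

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(classes))    # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):                              # images: (B, 3, 224, 224)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 2, 4, 1])))
```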

20 pages, 2246 KiB  
Article
A Heterogeneous Ensemble Approach for Travel Time Prediction Using Hybridized Feature Spaces and Support Vector Regression
by Jawad-ur-Rehman Chughtai, Irfan ul Haq, Saif ul Islam and Abdullah Gani
Sensors 2022, 22(24), 9735; https://0-doi-org.brum.beds.ac.uk/10.3390/s22249735 - 12 Dec 2022
Cited by 2 | Viewed by 1400
Abstract
Travel time prediction is essential to intelligent transportation systems directly affecting smart cities and autonomous vehicles. Accurately predicting traffic based on heterogeneous factors is highly beneficial but remains a challenging problem. The literature shows significant performance improvements when traditional machine learning and deep learning models are combined using an ensemble learning approach. This research mainly contributes by proposing an ensemble learning model based on hybridized feature spaces obtained from a bidirectional long short-term memory module and a bidirectional gated recurrent unit, followed by support vector regression to produce the final travel time prediction. The proposed approach consists of three stages–initially, six state-of-the-art deep learning models are applied to traffic data obtained from sensors. Then the feature spaces and decision scores (outputs) of the model with the highest performance are fused to obtain hybridized deep feature spaces. Finally, a support vector regressor is applied to the hybridized feature spaces to get the final travel time prediction. The performance of our proposed heterogeneous ensemble using test data showed significant improvements compared to the baseline techniques in terms of the root mean square error (53.87±3.50), mean absolute error (12.22±1.35) and the coefficient of determination (0.99784±0.00019). The results demonstrated that the hybridized deep feature space concept could produce more stable and superior results than the other baseline techniques. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
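A minimal sketch of the "hybridized feature space followed by SVR" idea is shown below. Random arrays stand in for the BiLSTM and BiGRU feature extractors, which are not reproduced here; only the fusion-then-regression structure is illustrated.

```python
# Minimal sketch of the hybridized-feature-space + SVR idea: deep features from
# two recurrent models are concatenated and a support vector regressor produces
# the final travel time. Random arrays stand in for the real feature extractors.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
bilstm_feats = rng.normal(size=(n, 64))          # placeholder BiLSTM feature space
bigru_feats = rng.normal(size=(n, 64))           # placeholder BiGRU feature space
travel_time = rng.normal(loc=300, scale=40, size=n)

hybrid = np.hstack([bilstm_feats, bigru_feats])  # hybridized deep feature space
X_tr, X_te, y_tr, y_te = train_test_split(hybrid, travel_time, random_state=0)

svr = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((svr.predict(X_te) - y_te) ** 2))
print(f"RMSE on held-out data: {rmse:.2f}")
```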

22 pages, 967 KiB  
Article
Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions
by George Broughton, Jiří Janota, Jan Blaha, Tomáš Rouček, Maxim Simon, Tomáš Vintr, Tao Yang, Zhi Yan and Tomáš Krajník
Sensors 2022, 22(22), 8855; https://0-doi-org.brum.beds.ac.uk/10.3390/s22228855 - 16 Nov 2022
Cited by 3 | Viewed by 1328
Abstract
The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data containing all the various situations the robots may potentially encounter during their routine operation. Thus, the workforce required for data collection and annotation is a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an independent car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it can collect data samples, which can be subsequently processed into training samples for the neural networks. As the tracking method is applied offline, it can exploit the detections made both before the currently processed scan and any subsequent future detections of the current scene, meaning that the quality of the annotations exceeds that of the raw detections. Along with the acquisition of the labels, the weather simulator is able to alter the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, being run in an offline fashion, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner. We show how such a framework produces an effective detector and how the weather simulator-in-the-loop is beneficial for the robustness of the detector. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort. This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

21 pages, 17102 KiB  
Article
Framework for Vehicle Make and Model Recognition—A New Large-Scale Dataset and an Efficient Two-Branch–Two-Stage Deep Learning Architecture
by Yangxintong Lyu, Ionut Schiopu, Bruno Cornelis and Adrian Munteanu
Sensors 2022, 22(21), 8439; https://0-doi-org.brum.beds.ac.uk/10.3390/s22218439 - 02 Nov 2022
Cited by 3 | Viewed by 3279
Abstract
In recent years, Vehicle Make and Model Recognition (VMMR) has attracted a lot of attention as it plays a crucial role in Intelligent Transportation Systems (ITS). Accurate and efficient VMMR systems are required in real-world applications including intelligent surveillance and autonomous driving. The paper introduces a new large-scale dataset and a novel deep learning paradigm for VMMR. A new large-scale dataset dubbed Diverse large-scale VMM (DVMM) is proposed collecting image-samples with the most popular vehicle brands operating in Europe. A novel VMMR framework is proposed which follows a two-branch architecture performing make and model recognition respectively. A two-stage training procedure and a novel decision module are proposed to process the make and model predictions and compute the final model prediction. In addition, a novel metric based on the true positive rate is proposed to compare classification confusion of the proposed 2B–2S and the baseline methods. A complex experimental validation is carried out, demonstrating the generality, diversity, and practicality of the proposed DVMM dataset. The experimental results show that the proposed framework provides 93.95% accuracy over the more diverse DVMM dataset and 95.85% accuracy over traditional VMMR datasets. The proposed two-branch approach outperforms the conventional one-branch approach for VMMR over small-, medium-, and large-scale datasets by providing lower vehicle model confusion and reduced inter-make ambiguity. The paper demonstrates the advantages of the proposed two-branch VMMR paradigm in terms of robustness and lower confusion relative to single-branch designs. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

18 pages, 2905 KiB  
Article
Anomaly Detection in Industrial IoT Using Distributional Reinforcement Learning and Generative Adversarial Networks
by Hafsa Benaddi, Mohammed Jouhari, Khalil Ibrahimi, Jalel Ben Othman and El Mehdi Amhoud
Sensors 2022, 22(21), 8085; https://0-doi-org.brum.beds.ac.uk/10.3390/s22218085 - 22 Oct 2022
Cited by 16 | Viewed by 4014
Abstract
Anomaly detection is one of the biggest issues of security in the Industrial Internet of Things (IIoT) due to the increase in cyber attack dangers for distributed devices and critical infrastructure networks. To face these challenges, the Intrusion Detection System (IDS) is suggested as a robust mechanism to protect and monitor malicious activities in IIoT networks. In this work, we suggest a new mechanism to improve the efficiency and robustness of the IDS system using Distributional Reinforcement Learning (DRL) and the Generative Adversarial Network (GAN). We aim to develop realistic and equilibrated distribution for a given feature set using artificial data in order to overcome the issue of data imbalance. We show how the GAN can efficiently assist the distributional RL-based-IDS in enhancing the detection of minority attacks. To assess the taxonomy of our approach, we verified the effectiveness of our algorithm by using the Distributed Smart Space Orchestration System (DS2OS) dataset. The performance of the normal DRL and DRL-GAN models in binary and multiclass classifications was evaluated based on anomaly detection datasets. The proposed models outperformed the normal DRL in the standard metrics of accuracy, precision, recall, and F1 score. We demonstrated that the GAN introduced in the training process of DRL with the aim of improving the detection of a specific class of data achieves the best results. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

12 pages, 444 KiB  
Article
A Comparative Analysis between Efficient Attention Mechanisms for Traffic Forecasting without Structural Priors
by Andrei-Cristian Rad, Camelia Lemnaru and Adrian Munteanu
Sensors 2022, 22(19), 7457; https://0-doi-org.brum.beds.ac.uk/10.3390/s22197457 - 01 Oct 2022
Cited by 2 | Viewed by 1212
Abstract
Dot-product attention is a powerful mechanism for capturing contextual information. Models that build on top of it have acclaimed state-of-the-art performance in various domains, ranging from sequence modelling to visual tasks. However, the main bottleneck is the construction of the attention map, which is quadratic with respect to the number of tokens in the sequence. Consequently, efficient alternatives have been developed in parallel, but it was only recently that their performances were compared and contrasted. This study performs a comparative analysis between some efficient attention mechanisms in the context of a purely attention-based spatio-temporal forecasting model used for traffic prediction. Experiments show that these methods can reduce the training times by up to 28% and the inference times by up to 31%, while the performance remains on par with the baseline. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

20 pages, 7828 KiB  
Article
A Long Short-Term Memory-Based Approach for Detecting Turns and Generating Road Intersections from Vehicle Trajectories
by Zijian Wan, Lianying Li, Huafei Yu and Min Yang
Sensors 2022, 22(18), 6997; https://0-doi-org.brum.beds.ac.uk/10.3390/s22186997 - 15 Sep 2022
Cited by 2 | Viewed by 1748
Abstract
Owing to the widespread use of GPS-enabled devices, sensing road information from vehicle trajectories is becoming an attractive method for road map construction and update. Although the detection of intersections is critical for generating road networks, it is still a challenging task. Traditional approaches detect intersections by identifying turning points based on the heading changes. As the intersections vary greatly in pattern and size, the appropriate threshold for heading change varies from area to area, which leads to the difficulty of accurate detection. To overcome this shortcoming, we propose a deep learning-based approach to detect turns and generate intersections. First, we convert each trajectory into a feature sequence that stores multiple motion attributes of the vehicle along the trajectory. Next, a supervised method uses these feature sequences and labeled trajectories to train a long short-term memory (LSTM) model that detects turning trajectory segments (TTSs), each of which indicates a turn occurring at an intersection. Finally, the detected TTSs are clustered to obtain the intersection coverages and internal structures. The proposed approach was tested using vehicle trajectories collected in Wuhan, China. The intersection detection precision and recall were 94.0% and 91.9% in a central urban region and 94.1% and 86.7% in a semi-urban region, respectively, which were significantly higher than those of the previously established local G* statistic-based approaches. In addition to the applications for road map development, the newly developed approach may have broad implications for the analysis of spatiotemporal trajectory data. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
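The general recipe above — convert each trajectory into a motion-attribute sequence and let a sequence model label turning segments instead of thresholding heading change — can be sketched as follows; the two features and the network size are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the general recipe (not the authors' exact features or network):
# each GPS fix becomes a vector of motion attributes, and an LSTM labels every
# step as "straight" or "turning" instead of applying a heading-change threshold.
import numpy as np
import torch
import torch.nn as nn

def to_feature_sequence(lon, lat, t):
    dx, dy, dt = np.diff(lon), np.diff(lat), np.diff(t)
    heading = np.arctan2(dy, dx)
    turn_rate = np.diff(heading, prepend=heading[0])          # heading change per step
    speed = np.hypot(dx, dy) / np.maximum(dt, 1e-6)
    return np.stack([speed, turn_rate], axis=1).astype(np.float32)

class TurnTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 2)                           # straight vs. turning

    def forward(self, x):                                     # x: (batch, steps, 2)
        h, _ = self.lstm(x)
        return self.out(h)                                    # per-step logits

lon = np.cumsum(np.random.rand(50)); lat = np.cumsum(np.random.rand(50)); t = np.arange(50.0)
seq = torch.from_numpy(to_feature_sequence(lon, lat, t)).unsqueeze(0)
print(TurnTagger()(seq).shape)                                # torch.Size([1, 49, 2])
```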

9 pages, 2791 KiB  
Communication
Ghostformer: A GhostNet-Based Two-Stage Transformer for Small Object Detection
by Sijia Li, Furkat Sultonov, Jamshid Tursunboev, Jun-Hyun Park, Sangseok Yun and Jae-Mo Kang
Sensors 2022, 22(18), 6939; https://0-doi-org.brum.beds.ac.uk/10.3390/s22186939 - 14 Sep 2022
Cited by 9 | Viewed by 2588
Abstract
In this paper, we propose a novel two-stage transformer with GhostNet, which improves the performance of the small object detection task. Specifically, based on the original Deformable Transformers for End-to-End Object Detection (deformable DETR), we chose GhostNet as the backbone to extract features, since it is better suited for an efficient feature extraction. Furthermore, at the target detection stage, we selected the 300 best bounding box results as regional proposals, which were subsequently set as primary object queries of the decoder layer. Finally, in the decoder layer, we optimized and modified the queries to increase the target accuracy. In order to validate the performance of the proposed model, we adopted a widely used COCO 2017 dataset. Extensive experiments demonstrated that the proposed scheme yielded a higher average precision (AP) score in detecting small objects than the existing deformable DETR model. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

16 pages, 10711 KiB  
Article
Single Camera Face Position-Invariant Driver’s Gaze Zone Classifier Based on Frame-Sequence Recognition Using 3D Convolutional Neural Networks
by Catherine Lollett, Mitsuhiro Kamezaki and Shigeki Sugano
Sensors 2022, 22(15), 5857; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155857 - 05 Aug 2022
Cited by 2 | Viewed by 1771
Abstract
Estimating the driver’s gaze in a natural real-world setting can be problematic under various challenging scenario conditions. For example, faces undergo occlusions, changing illumination, or varying positions while driving. In this effort, we aim to reduce misclassifications in driving situations in which the driver’s face is at different distances from the camera. Three-dimensional Convolutional Neural Network (CNN) models can build a spatio-temporal representation of the driver that extracts features encoded in multiple adjacent frames and can therefore describe motion. This characteristic may help ease the deficiencies of a per-frame recognition system caused by the lack of context information. For example, the front, navigator, right window, left window, back mirror, and speed meter are among the common areas checked by drivers. Based on this, we implement and evaluate a model that is able to detect the head direction toward these regions at various distances from the camera. In our evaluation, the 2D CNN model had a mean average recall of 74.96% across the three models, whereas the 3D CNN model had a mean average recall of 87.02%. These results show that our proposed 3D CNN-based approach outperforms a 2D CNN per-frame recognition approach in driving situations in which the driver’s face is at different distances from the camera. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
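A hypothetical sketch of a 3D-CNN gaze-zone classifier over a short clip of face frames is given below; the layer sizes, eight-frame clip length and input resolution are assumptions, with only the six gaze zones taken from the abstract.

```python
# Illustrative 3D-CNN gaze-zone classifier over a short clip of face frames;
# layer sizes, clip length and resolution are assumptions, not the paper's model.
import torch
import torch.nn as nn

zones = ["front", "navigator", "right window", "left window", "back mirror", "speed meter"]

class GazeZone3DCNN(nn.Module):
    def __init__(self, n_zones=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))                   # pool over time and space
        self.classifier = nn.Linear(32, n_zones)

    def forward(self, clip):                           # clip: (B, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

clip = torch.randn(2, 3, 8, 112, 112)                  # 8 adjacent face frames per sample
print(GazeZone3DCNN()(clip).shape)                     # torch.Size([2, 6])
```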

18 pages, 4344 KiB  
Article
Short-Term Drift Prediction of Multi-Functional Buoys in Inland Rivers Based on Deep Learning
by Fei Zeng, Hongri Ou and Qing Wu
Sensors 2022, 22(14), 5120; https://0-doi-org.brum.beds.ac.uk/10.3390/s22145120 - 07 Jul 2022
Viewed by 1503
Abstract
The multi-functional buoy is an important facility for assisting the navigation of inland waterway ships. Therefore, real-time tracking of its position is an essential process to ensure the safety of ship navigation. Aiming at the problem of the low accuracy of multi-functional buoy drift prediction, an integrated deep learning model incorporating the attention mechanism and ResNet-GRU (RGA) to predict short-term drift values of buoys is proposed. The model has the strong feature expression capability of ResNet and the temporal memory capability of GRU, and the attention mechanism can capture important information adaptively, which can solve the nonlinear time series drift prediction problem well. In this paper, the data collected from multi-functional buoy #4 at Nantong anchorage No. 2 in the Yangtze River waters in China were studied as an example, and first linear interpolation was used for filling in missing values; then, input variables were selected based on Pearson correlation analysis, and finally, the model structure was designed for training and testing. The experimental results show that the mean square error, mean absolute error, root mean square error and mean percentage error of the RGA model on the test set are 5.113036, 1.609969, 2.261202 and 15.575886, respectively, which are significantly better than other models. This study provides a new idea for predicting the short-term drift of multi-functional buoys, which is helpful for their tracking and management. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
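The two preprocessing steps mentioned above (filling gaps by linear interpolation and selecting inputs via Pearson correlation) can be sketched on a made-up buoy data frame; the column names and the 0.3 correlation threshold are assumptions, not values from the paper.

```python
# Sketch of the preprocessing: linear interpolation of missing values, then
# Pearson-correlation-based input selection, on synthetic buoy data. Column
# names and the 0.3 threshold are assumed placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "drift": rng.normal(size=200).cumsum(),
    "wind_speed": rng.normal(size=200),
    "current_speed": rng.normal(size=200),
    "water_level": rng.normal(size=200),
})
df.loc[rng.choice(200, 10, replace=False), "drift"] = np.nan    # simulate gaps

df = df.interpolate(method="linear")                            # fill missing values

corr = df.corr(method="pearson")["drift"].drop("drift")         # correlation with target
selected = corr[corr.abs() > 0.3].index.tolist()                # keep informative inputs
print(corr.round(3).to_dict(), selected)
```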

13 pages, 1266 KiB  
Article
TA-Unet: Integrating Triplet Attention Module for Drivable Road Region Segmentation
by Sijia Li, Furkat Sultonov, Qingshan Ye, Yong Bai, Jun-Hyun Park, Chilsig Yang, Minseok Song, Sungwoo Koo and Jae-Mo Kang
Sensors 2022, 22(12), 4438; https://0-doi-org.brum.beds.ac.uk/10.3390/s22124438 - 12 Jun 2022
Cited by 1 | Viewed by 2202
Abstract
Road segmentation has been one of the leading research areas in the realm of autonomous driving cars due to the possible benefits autonomous vehicles can offer. Significant reduction of crashes, greater independence for the people with disabilities, and reduced traffic congestion on the roads are some of the vivid examples of them. Considering the importance of self-driving cars, it is vital to develop models that can accurately segment drivable regions of roads. The recent advances in the area of deep learning have presented effective methods and techniques to tackle road segmentation tasks effectively. However, the results of most of them are not satisfactory for implementing them into practice. To tackle this issue, in this paper, we propose a novel model, dubbed as TA-Unet, that is able to produce quality drivable road region segmentation maps. The proposed model incorporates a triplet attention module into the encoding stage of the U-Net network to compute attention weights through the triplet branch structure. Additionally, to overcome the class-imbalance problem, we experiment on different loss functions, and confirm that using a mixed loss function leads to a boost in performance. To validate the performance and efficiency of the proposed method, we adopt the publicly available UAS dataset, and compare its results to the framework of the dataset and also to four state-of-the-art segmentation models. Extensive experiments demonstrate that the proposed TA-Unet outperforms baseline methods both in terms of pixel accuracy and mIoU, with 98.74% and 97.41%, respectively. Finally, the proposed method yields clearer segmentation maps on different sample sets compared to other baseline methods. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

21 pages, 11825 KiB  
Article
Real-Time Vehicle Classification and Tracking Using a Transfer Learning-Improved Deep Learning Network
by Bipul Neupane, Teerayut Horanont and Jagannath Aryal
Sensors 2022, 22(10), 3813; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103813 - 18 May 2022
Cited by 29 | Viewed by 6645
Abstract
Accurate vehicle classification and tracking are increasingly important subjects for intelligent transport systems (ITSs) and for planning that utilizes precise location intelligence. Deep learning (DL) and computer vision are intelligent methods; however, accurate real-time classification and tracking come with problems. We tackle three prominent problems (P1, P2, and P3): the need for a large training dataset (P1), the domain-shift problem (P2), and coupling a real-time multi-vehicle tracking algorithm with DL (P3). To address P1, we created a training dataset of nearly 30,000 samples from existing cameras with seven classes of vehicles. To tackle P2, we trained and applied transfer learning-based fine-tuning on several state-of-the-art YOLO (You Only Look Once) networks. For P3, we propose a multi-vehicle tracking algorithm that obtains the per-lane count, classification, and speed of vehicles in real time. The experiments showed that accuracy doubled after fine-tuning (71% vs. up to 30%). Based on a comparison of four YOLO networks, coupling the YOLOv5-large network to our tracking algorithm provided a trade-off between overall accuracy (95% vs. up to 90%), loss (0.033 vs. up to 0.036), and model size (91.6 MB vs. up to 120.6 MB). The implications of these results are in spatial information management and sensing for intelligent transport planning. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
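A toy sketch of the post-detection bookkeeping implied above — deriving per-lane counts and speeds from tracked vehicle centroids — is shown below; the lane boundaries, frame rate and metre-per-pixel scale are invented for the example and are not taken from the paper.

```python
# Toy per-lane count and speed estimation from tracked vehicle centroids.
# Lane edges, frame rate and the metre-per-pixel scale are invented placeholders.
import numpy as np

FPS = 25.0
M_PER_PX = 0.05
LANE_EDGES = [0, 300, 600, 900]                     # pixel x-ranges of three lanes

def lane_of(x):
    return int(np.searchsorted(LANE_EDGES, x, side="right")) - 1

def summarize(track):                               # track: list of (frame, x_px, y_px)
    track = np.asarray(track, dtype=float)
    dt = (track[-1, 0] - track[0, 0]) / FPS
    dist = np.hypot(*(track[-1, 1:] - track[0, 1:])) * M_PER_PX
    return lane_of(track[-1, 1]), 3.6 * dist / max(dt, 1e-6)    # lane id, speed km/h

lane_counts = {0: 0, 1: 0, 2: 0}
for track in [[(0, 120, 400), (50, 150, 40)], [(0, 700, 380), (40, 720, 60)]]:
    lane, speed = summarize(track)
    lane_counts[lane] += 1
    print(f"lane {lane}: {speed:.1f} km/h")
print(lane_counts)
```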

11 pages, 3551 KiB  
Communication
Ensuring the Reliability of Virtual Sensors Based on Artificial Intelligence within Vehicle Dynamics Control Systems
by Philipp Maximilian Sieberg and Dieter Schramm
Sensors 2022, 22(9), 3513; https://0-doi-org.brum.beds.ac.uk/10.3390/s22093513 - 05 May 2022
Cited by 4 | Viewed by 2084
Abstract
The use of virtual sensors in vehicles represents a cost-effective alternative to the installation of physical hardware. In addition to physical models resulting from theoretical modeling, artificial intelligence and machine learning approaches, which incorporate experimental modeling, are increasingly used. Due to the resulting black-box characteristics, virtual sensors based on artificial intelligence are not fully reliable, which can have fatal consequences in safety-critical applications. Therefore, a hybrid method is presented that safeguards the reliability of artificial intelligence-based estimations. The application example is the state estimation of the vehicle roll angle. The state estimation is coupled with a central predictive vehicle dynamics control. The implementation and validation are performed by a co-simulation between IPG CarMaker and MATLAB/Simulink. By using the hybrid method, unreliable estimations by the artificial intelligence-based model resulting from erroneous input signals are detected and handled. Thus, a valid and reliable state estimate is available throughout. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
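A highly simplified sketch of the safeguarding idea follows: the AI-based roll-angle estimate is only used when its input signals pass plausibility checks, and a fallback estimate is returned otherwise. The signals, bounds and fallback are assumed placeholders; the paper's hybrid method is not reproduced here.

```python
# Highly simplified safeguard sketch: use the ANN estimate only when its input
# signals pass plausibility checks, otherwise fall back to a model-based value.
# Signal names, bounds and both estimators are assumed placeholders.
from dataclasses import dataclass

@dataclass
class SignalRange:
    lo: float
    hi: float

PLAUSIBLE = {                                       # assumed plausibility bounds
    "lateral_acc": SignalRange(-12.0, 12.0),        # m/s^2
    "steering_angle": SignalRange(-600.0, 600.0),   # deg
    "yaw_rate": SignalRange(-2.0, 2.0),             # rad/s
}

def safeguarded_roll_angle(inputs, ai_model, physical_model):
    """Return a roll-angle estimate [rad] that is safeguarded against bad inputs."""
    for name, value in inputs.items():
        r = PLAUSIBLE[name]
        if not (r.lo <= value <= r.hi):             # erroneous input detected
            return physical_model(inputs)           # fall back to model-based estimate
    return ai_model(inputs)                         # inputs plausible: trust the ANN

inputs = {"lateral_acc": 3.2, "steering_angle": 45.0, "yaw_rate": 0.15}
est = safeguarded_roll_angle(inputs,
                             ai_model=lambda s: 0.021,        # stand-in ANN estimate
                             physical_model=lambda s: 0.019)  # stand-in physical model
print(est)
```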

20 pages, 546 KiB  
Article
Applying Hybrid LSTM-GRU Model Based on Heterogeneous Data Sources for Traffic Speed Prediction in Urban Areas
by Noureen Zafar, Irfan Ul Haq, Jawad-ur-Rehman Chughtai and Omair Shafiq
Sensors 2022, 22(9), 3348; https://0-doi-org.brum.beds.ac.uk/10.3390/s22093348 - 27 Apr 2022
Cited by 21 | Viewed by 5719
Abstract
With the advent of the Internet of Things (IoT), it has become possible to have a variety of data sets generated through numerous types of sensors deployed across large urban areas, thus empowering the notion of smart cities. In smart cities, various types of sensors may fall into different administrative domains and may be accessible through exposed Application Program Interfaces (APIs). In such setups, for traffic prediction in Intelligent Transport Systems (ITS), one of the major prerequisites is the integration of heterogeneous data sources within a preprocessing data pipeline resulting into hybrid feature space. In this paper, we first present a comprehensive algorithm to integrate heterogeneous data obtained from sensors, services, and exogenous data sources into a hybrid spatial–temporal feature space. Following a rigorous exploratory data analysis, we apply a variety of deep learning algorithms specialized for time series geospatial data and perform a comparative analysis of Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), and their hybrid combinations. The hybrid LSTM–GRU model outperforms the rest with Root Mean Squared Error (RMSE) of 4.5 and Mean Absolute Percentage Error (MAPE) of 6.67%. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
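In the spirit of the best-performing model above, a minimal hybrid LSTM-GRU regressor can be sketched in Keras; the layer sizes, 12-step window and 10 input features are placeholder assumptions rather than the paper's configuration.

```python
# Sketch of a hybrid LSTM-GRU regressor for traffic speed; window length, layer
# sizes and feature count are placeholder assumptions, not the paper's setup.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_features = 12, 10                     # 12 past time steps, 10 hybrid features

model = keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(64, return_sequences=True),     # LSTM stage keeps the sequence
    layers.GRU(32),                             # GRU stage summarises it
    layers.Dense(1),                            # predicted traffic speed
])
model.compile(optimizer="adam", loss="mse", metrics=["mape"])

X = np.random.rand(256, window, n_features).astype("float32")   # synthetic stand-in data
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```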

18 pages, 8116 KiB  
Article
Bike-Sharing Demand Prediction at Community Level under COVID-19 Using Deep Learning
by Aliasghar Mehdizadeh Dastjerdi and Catherine Morency
Sensors 2022, 22(3), 1060; https://0-doi-org.brum.beds.ac.uk/10.3390/s22031060 - 29 Jan 2022
Cited by 24 | Viewed by 4332
Abstract
An important question in planning and designing bike-sharing services is to support the user’s travel demand by allocating bikes at the stations in an efficient and reliable manner which may require accurate short-time demand prediction. This study focuses on the short-term forecasting, 15 min ahead, of the shared bikes demand in Montreal using a deep learning approach. Having a set of bike trips, the study first identifies 6 communities in the bike-sharing network using the Louvain algorithm. Then, four groups of LSTM-based architectures are adopted to predict pickup demand in each community. A univariate ARIMA model is also used to compare results as a benchmark. The historical trip data from 2017 to 2021 are used in addition to the extra inputs of demand related engineered features, weather conditions, and temporal variables. The selected timespan allows predicting bike demand during the COVID-19 pandemic. Results show that the deep learning models significantly outperform the ARIMA one. The hybrid CNN-LSTM achieves the highest prediction accuracy. Furthermore, adding the extra variables improves the model performance regardless of its architecture. Thus, using the hybrid structure enriched with additional input features provides a better insight into the bike demand patterns, in support of bike-sharing operational management. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)

19 pages, 3179 KiB  
Article
Hyperparameter Optimization Techniques for Designing Software Sensors Based on Artificial Neural Networks
by Sebastian Blume, Tim Benedens and Dieter Schramm
Sensors 2021, 21(24), 8435; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248435 - 17 Dec 2021
Cited by 10 | Viewed by 2628
Abstract
Software sensors are playing an increasingly important role in current vehicle development. Such soft sensors can be based on both physical modeling and data-based modeling. Data-driven modeling is based on building a model purely on captured data which means that no system knowledge is required for the application. At the same time, hyperparameters have a particularly large influence on the quality of the model. These parameters influence the architecture and the training process of the machine learning algorithm. This paper deals with the comparison of different hyperparameter optimization methods for the design of a roll angle estimator based on an artificial neural network. The comparison is drawn based on a pre-generated simulation data set created with ISO standard driving maneuvers. Four different optimization methods are used for the comparison. Random Search and Hyperband are two similar methods based purely on randomness, whereas Bayesian Optimization and the genetic algorithm are knowledge-based methods, i.e., they process information from previous iterations. The objective function for all optimization methods consists of the root mean square error of the training process and the reference data generated in the simulation. To guarantee a meaningful result, k-fold cross-validation is integrated for the training process. Finally, all methods are applied to the predefined parameter space. It is shown that the knowledge-based methods lead to better results. In particular, the Genetic Algorithm leads to promising solutions in this application. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
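The random-search baseline with a k-fold cross-validated RMSE objective can be sketched as follows; an sklearn MLP stands in for the roll-angle estimator, and the synthetic data and search space are assumptions rather than the paper's parameter space.

```python
# Sketch of random-search hyperparameter optimization with a k-fold
# cross-validated RMSE objective; the MLP, data and search space are stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 6))                              # stand-in maneuver features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=400)    # stand-in roll-angle target

def cv_rmse(params, k=5):
    errs = []
    for tr, te in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        m = MLPRegressor(hidden_layer_sizes=params["layers"],
                         learning_rate_init=params["lr"], max_iter=500).fit(X[tr], y[tr])
        errs.append(np.sqrt(np.mean((m.predict(X[te]) - y[te]) ** 2)))
    return float(np.mean(errs))

best = None
for _ in range(10):                                        # random draws from the space
    params = {"layers": tuple(rng.choice([16, 32, 64], size=rng.integers(1, 3))),
              "lr": float(10 ** rng.uniform(-4, -2))}
    score = cv_rmse(params)
    if best is None or score < best[0]:
        best = (score, params)
print(best)
```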

18 pages, 3669 KiB  
Article
Hourly Origin–Destination Matrix Estimation Using Intelligent Transportation Systems Data and Deep Learning
by Shahriar Afandizadeh Zargari, Amirmasoud Memarnejad and Hamid Mirzahossein
Sensors 2021, 21(21), 7080; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217080 - 26 Oct 2021
Cited by 6 | Viewed by 3739
Abstract
Predicting the travel demand plays an indispensable role in urban transportation planning. Data collection methods for estimating the origin–destination (OD) demand matrix are being extensively shifted from traditional survey techniques to the pre-collected data from intelligent transportation systems (ITSs). This shift is partly due to the high cost of conducting traditional surveys and partly due to the diversity of scattered data produced by ITSs and the opportunity to derive extra benefits out of this big data. This study attempts to predict the OD matrix of Tehran metropolis using a set of ITS data, including the data extracted from automatic number plate recognition (ANPR) cameras, smart fare cards, loop detectors at intersections, global positioning systems (GPS) of navigation software, socio-economic and demographic characteristics as well as land-use features of zones. For this purpose, five models based on machine learning (ML) techniques are developed for training and test. In evaluating the performance of the models, the statistical methods show that the convolutional neural network (CNN) leads to the best performance in terms of accuracy in predicting the OD matrix and has the lowest error in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE). Moreover, the predicted OD matrix was structurally compared with the ground truth matrix, and the CNN model also shows the highest structural similarity with the ground truth OD matrix in the presented case. Full article
(This article belongs to the Special Issue Application of Deep Learning in Intelligent Transportation)
