
Advanced Computer Vision Techniques for Autonomous Driving

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 42251

Special Issue Editors


Prof. Dr. M. Hassaballah
Guest Editor
Associate Professor, Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, Egypt
Interests: computer vision; image processing; object detection and tracking; scene understanding; deep learning; biometrics; security

Prof. Dr. Zhengming Ding
Guest Editor
Department of Computer Science, Tulane University, New Orleans, LA 70118, USA
Interests: transfer learning; domain adaptation; deep learning and multi-view learning; vehicle and person re-identification

Dr. Senthil Yogamani
Guest Editor
AI Architect, Autonomous Driving, Valeo Vision Systems, Ireland
Interests: autonomous driving; computer vision; deep learning; semantic segmentation; automated parking systems

Special Issue Information

Dear Colleagues,

Autonomous driving (AD) refers to self-driving vehicles or any transport system that moves without human intervention. Automotive systems are equipped with cameras and sensors to cover all fields of view and range. The sensor architecture in AD includes multiple sets of cameras, radars, and LiDARs, as well as GPS/GNSS for absolute localization and inertial measurement units that provide the 3D pose of the vehicle in space. A representation of the environment state, or scene understanding, is used by a decision-making system to produce the final driving policy; this representation is obtained by combining several perception or computer vision tasks such as semantic segmentation, motion estimation, depth estimation, and soiling detection. Computer vision is therefore a key technique in AD technologies, and there is a need to explore new and emerging trends in computer vision for autonomous driving. This Special Issue aims to address the most up-to-date impacts of computer vision on the progress of autonomous driving research. Topics of interest include, but are not limited to:

  • New trends on vision and sensors for autonomous driving;
  • Vision-based traffic flow analysis and smart vehicle technologies;
  • Vehicle trajectory prediction in autonomous driving;
  • Vehicle classification and semantic segmentation;
  • Traffic sign detection, recognition, and scene understanding;
  • Detection, tracking, learning, and predicting on-road pedestrian behavior;
  • Object detection and tracking on fisheye cameras for autonomous driving;
  • Unsupervised, weakly supervised, and reinforcement deep learning.

Prof. Dr. M. Hassaballah
Prof. Dr. Zhengming Ding
Dr. Senthil Yogamani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 
 

Keywords

  • Autonomous driving
  • Computer vision
  • Automated parking
  • Scene understanding
  • Vehicle trajectory prediction
  • Object detection and tracking
  • Semantic segmentation
  • Tracking using a LiDAR sensor
  • Deep neural network models
  • Smart sensors
 
 

Published Papers (5 papers)


Research


14 pages, 526 KiB  
Article
WPO-Net: Windowed Pose Optimization Network for Monocular Visual Odometry Estimation
by Nivesh Gadipudi, Irraivan Elamvazuthi, Cheng-Kai Lu, Sivajothi Paramasivam and Steven Su
Sensors 2021, 21(23), 8155; https://doi.org/10.3390/s21238155 - 06 Dec 2021
Cited by 5 | Viewed by 2214
Abstract
Visual odometry is the process of estimating the incremental localization of a camera in three-dimensional space for autonomous driving. New learning-based methods have emerged that do not require camera calibration and are robust to external noise. In this work, a new calibration-free method called the "windowed pose optimization network" (WPO-Net) is proposed to estimate the 6-degrees-of-freedom pose of a monocular camera. The architecture follows supervised learning-based methods, with a feature encoder and a pose regressor that take stacks of two consecutive grayscale images at each step for training and enforce composite pose constraints. The KITTI dataset is used to evaluate the performance of the proposed method, which yields a rotational error of 3.12 deg/100 m with a training time of 41.32 ms and an inference time of 7.87 ms. Experiments demonstrate performance competitive with other state-of-the-art related works.
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
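The abstract describes a feature encoder followed by a pose regressor that consumes stacks of two consecutive grayscale frames and enforces a composite pose constraint over a window. A minimal PyTorch sketch of that idea is shown below; it is not the authors' code, and the layer sizes, input resolution, and the additive form of the window-consistency term are illustrative assumptions (a faithful version would compose poses on SE(3)).

```python
# Minimal sketch (not the authors' WPO-Net) of a two-frame 6-DoF pose regressor:
# a small CNN encoder over a 2-channel stack of consecutive grayscale frames feeds
# an MLP that outputs 3 translations + 3 Euler angles.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoFramePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # feature encoder
            nn.Conv2d(2, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(                     # pose regressor
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 6)
        )

    def forward(self, frame_pair):                          # (B, 2, H, W) -> (B, 6)
        return self.regressor(self.encoder(frame_pair))

net = TwoFramePoseNet()
f01 = torch.randn(1, 2, 128, 416)    # frames t, t+1 stacked as channels
f12 = torch.randn(1, 2, 128, 416)    # frames t+1, t+2
f02 = torch.randn(1, 2, 128, 416)    # frames t, t+2
# Illustrative window-consistency term: the pose across the window should agree
# with the composition of the per-step poses (addition is a simplification).
window_loss = F.mse_loss(net(f01) + net(f12), net(f02))
```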

18 pages, 59748 KiB  
Article
A Comparison of Bottom-Up Models for Spatial Saliency Predictions in Autonomous Driving
by Jaime Maldonado and Lino Antoni Giefer
Sensors 2021, 21(20), 6825; https://doi.org/10.3390/s21206825 - 14 Oct 2021
Cited by 3 | Viewed by 2397
Abstract
Bottom-up saliency models identify the salient regions of an image based on features such as color, intensity, and orientation. These models are typically used as predictors of human visual behavior and for computer vision tasks. In this paper, we conduct a systematic evaluation of the saliency maps computed with four selected bottom-up models on images of urban and highway traffic scenes. Saliency is investigated both over whole images and at the object level, and is characterized in terms of the energy and the entropy of the saliency maps. We identify significant differences with respect to the amount, size, and shape complexity of the salient areas computed by different models. Based on these findings, we analyze the likelihood that object instances fall within the salient areas of an image and investigate the agreement between the segments of traffic participants and the saliency maps of the different models. The overall and object-level analysis provides insights into the distinctive features of salient areas identified by different models, which can be used as selection criteria for prospective applications in autonomous driving such as object detection and tracking.
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
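The energy and entropy statistics and the object-level agreement analysis mentioned above can be illustrated with a short sketch. The definitions used here (mean normalized saliency as energy, Shannon entropy of the map treated as a pixel distribution, and the overlap of an object mask with a thresholded salient area) are plausible assumptions, not the paper's exact formulation.

```python
# Sketch of map-level saliency statistics and object-level agreement (assumed definitions).
import numpy as np

def saliency_statistics(saliency_map):
    s = saliency_map.astype(np.float64)
    s = (s - s.min()) / (np.ptp(s) + 1e-12)            # normalize to [0, 1]
    energy = float(s.mean())                           # overall amount of saliency
    p = s / (s.sum() + 1e-12)                          # map as a pixel distribution
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return energy, entropy

def object_saliency_agreement(saliency_map, object_mask, threshold=0.5):
    """Fraction of an object's pixels that fall inside the thresholded salient area."""
    s = (saliency_map - saliency_map.min()) / (np.ptp(saliency_map) + 1e-12)
    salient = s >= threshold
    return float((salient & object_mask).sum() / (object_mask.sum() + 1e-12))

sal = np.random.rand(256, 512)                         # placeholder saliency map
mask = np.zeros((256, 512), dtype=bool)
mask[100:150, 200:300] = True                          # placeholder object segment
print(saliency_statistics(sal), object_saliency_agreement(sal, mask))
```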

18 pages, 8843 KiB  
Article
3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions
by Nguyen Anh Minh Mai, Pierre Duthon, Louahdi Khoudour, Alain Crouzil and Sergio A. Velastin
Sensors 2021, 21(20), 6711; https://doi.org/10.3390/s21206711 - 09 Oct 2021
Cited by 21 | Viewed by 3955
Abstract
The role of sensors such as cameras or LiDAR (Light Detection and Ranging) is crucial for the environmental awareness of self-driving cars. However, the data collected from these sensors are subject to distortions in extreme weather conditions such as fog, rain, and snow, which can lead to many safety problems when operating a self-driving vehicle. The purpose of this study is to analyze the effects of fog on the detection of objects in driving scenes and then to propose methods for improvement. Collecting and processing data in adverse weather conditions is often more difficult than in good weather; hence, a synthetic dataset that simulates bad weather conditions is a simpler and more economical way to validate a method before working with a real dataset. In this paper, we apply fog synthesis to the public KITTI dataset to generate the Multifog KITTI dataset for both images and point clouds. In terms of processing tasks, we test our previous 3D object detector based on LiDAR and camera, named the Sparse LiDAR Stereo Fusion network (SLS-Fusion), to see how it is affected by foggy weather conditions. We propose training on both the original and the augmented dataset to improve performance in foggy weather conditions while keeping good performance under normal conditions. Experiments on the KITTI and the proposed Multifog KITTI datasets show that, before any improvement, performance drops by 42.67% in 3D object detection for Moderate objects in foggy weather. With the proposed training strategy, the results improve significantly, by 26.72%, while performance on the original dataset drops by only 8.23%. In summary, fog often causes the failure of 3D detection in driving scenes, and additional training with the augmented dataset significantly improves the performance of the proposed 3D object detection algorithm for self-driving cars in foggy weather conditions.
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
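The fog synthesis used to build the Multifog KITTI images can be illustrated with the standard atmospheric scattering model, I_fog = I * t + A * (1 - t) with transmission t = exp(-beta * depth). The sketch below is a generic illustration under that assumption; the attenuation coefficient, airlight value, and the paper's actual fog pipeline for images and point clouds are not taken from the article.

```python
# Generic fog synthesis sketch using the atmospheric scattering model (assumed parameters).
import numpy as np

def add_synthetic_fog(image, depth_m, beta=0.05, airlight=0.9):
    """image: float RGB in [0, 1], shape (H, W, 3); depth_m: metric depth, shape (H, W)."""
    transmission = np.exp(-beta * depth_m)[..., None]   # per-pixel transmission t
    fogged = image * transmission + airlight * (1.0 - transmission)
    return np.clip(fogged, 0.0, 1.0)

clear = np.random.rand(256, 512, 3)                     # placeholder clear-weather image
depth = np.random.uniform(5.0, 80.0, size=(256, 512))   # placeholder depth map in meters
foggy = add_synthetic_fog(clear, depth, beta=0.08)
# The training strategy proposed above then draws batches from the union of the
# clear-weather and fog-augmented samples (e.g. torch.utils.data.ConcatDataset).
```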

15 pages, 3270 KiB  
Article
Transfer Learning Based Semantic Segmentation for 3D Object Detection from Point Cloud
by Muhammad Imad, Oualid Doukhi and Deok-Jin Lee
Sensors 2021, 21(12), 3964; https://doi.org/10.3390/s21123964 - 08 Jun 2021
Cited by 27 | Viewed by 7176
Abstract
Three-dimensional object detection utilizing LiDAR point cloud data is an indispensable part of autonomous driving perception systems. Point cloud-based 3D object detection achieves higher accuracy than cameras at nighttime. However, most LiDAR-based 3D object detection methods work in a supervised manner, which means their state-of-the-art performance relies heavily on large-scale, well-labeled datasets, while such annotated datasets can be expensive to obtain and are only accessible in limited scenarios. Transfer learning is a promising approach to reduce the large-scale training dataset requirement, but existing transfer learning object detectors are primarily designed for 2D rather than 3D object detection. In this work, we utilize 3D point cloud data more effectively by representing the bird's-eye-view (BEV) scene and propose a transfer learning based point cloud semantic segmentation approach for 3D object detection. The proposed model minimizes the need for large-scale training datasets and consequently reduces training time. First, a preprocessing stage filters the raw point cloud data to a BEV map within a specific field of view. Second, the transfer learning stage reuses knowledge from a previously learned classification task (with more data for training) and generalizes it to the semantic segmentation-based 2D object detection task. Finally, the 2D detection results from the BEV image are back-projected into 3D in the postprocessing stage. We verify the results on two datasets, the KITTI 3D object detection dataset and the Ouster LiDAR-64 dataset, demonstrating that the proposed method is highly competitive in terms of mean average precision (mAP up to 70%) while still running at more than 30 frames per second (FPS).
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
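The preprocessing stage described above, filtering the raw point cloud into a BEV map within a specific field of view, can be sketched as follows. The range limits, grid resolution, and choice of per-cell channels (maximum height and intensity) are illustrative assumptions rather than the authors' exact encoding.

```python
# Sketch: project a LiDAR point cloud onto a BEV grid restricted to a forward field of view.
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), resolution=0.1):
    """points: (N, 4) array of x, y, z, intensity in the LiDAR frame."""
    x, y, z, intensity = points.T
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]

    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    rows = ((x - x_range[0]) / resolution).astype(int)
    cols = ((y - y_range[0]) / resolution).astype(int)

    height = np.zeros((h, w), dtype=np.float32)          # empty cells stay at 0
    inten = np.zeros((h, w), dtype=np.float32)
    np.maximum.at(height, (rows, cols), z)               # per-cell maximum height
    np.maximum.at(inten, (rows, cols), intensity)        # per-cell maximum intensity
    return np.stack([height, inten], axis=-1)            # (h, w, 2) BEV map

cloud = np.random.rand(10000, 4) * [40, 40, 3, 1] - [0, 20, 1, 0]   # placeholder scan
bev_map = pointcloud_to_bev(cloud)                                   # shape (400, 400, 2)
```

The 2D detections produced on such a BEV image can then be back-projected to 3D boxes by mapping cell coordinates and cell heights back into the LiDAR frame, as the postprocessing stage describes.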

Review


35 pages, 2507 KiB  
Review
Automatic Number Plate Recognition: A Detailed Survey of Relevant Algorithms
by Lubna, Naveed Mufti and Syed Afaq Ali Shah
Sensors 2021, 21(9), 3028; https://doi.org/10.3390/s21093028 - 26 Apr 2021
Cited by 69 | Viewed by 23831
Abstract
Technologies and services for smart vehicles and Intelligent Transportation Systems (ITS) continue to revolutionize many aspects of human life. This paper presents a detailed survey of current techniques and advancements in Automatic Number Plate Recognition (ANPR) systems, with a comprehensive performance comparison of various real-time-tested and simulated algorithms, including those involving computer vision (CV). ANPR technology detects and recognizes vehicles by their number plates using recognition techniques. Even with the best algorithms, a successful ANPR deployment may require additional hardware to maximize accuracy. Number plate condition, non-standardized formats, complex scenes, camera quality, camera mount position, tolerance to distortion, motion blur, contrast problems, reflections, processing and memory limitations, environmental conditions, indoor/outdoor or day/night shots, software tools, and other hardware-based constraints may undermine performance. These inconsistencies, challenging environments, and other complexities make ANPR an interesting field for researchers. The Internet of Things is beginning to shape the future of many industries and is paving new ways for ITS; ANPR can be well utilized by integrating it with RFID systems, GPS, Android platforms, and other similar technologies. Deep learning techniques are widely utilized in the CV field for better detection rates. This survey aims to advance the state of knowledge in ITS (ANPR) built on CV algorithms by citing relevant prior work, analyzing and presenting a survey of extraction, segmentation, and recognition techniques, and providing guidelines on future trends in this area.
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
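The survey covers the classical extraction, segmentation, and recognition stages of an ANPR pipeline. A minimal sketch of such a pipeline is given below, using OpenCV contour heuristics to propose plate-like regions and Tesseract OCR for recognition; the thresholds, aspect-ratio bounds, and the choice of OCR engine are illustrative assumptions, not recommendations from the survey.

```python
# Sketch of a classical ANPR pipeline: plate extraction by contour heuristics, then OCR.
import cv2
import pytesseract

def find_plate_candidates(bgr_image, min_aspect=2.0, max_aspect=6.0, min_area=1500):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bilateralFilter(gray, 11, 17, 17), 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and min_aspect <= w / max(h, 1) <= max_aspect:
            candidates.append(gray[y:y + h, x:x + w])    # plate-shaped crops
    return candidates

def recognize_plate(plate_gray):
    # Binarize the crop and read it as a single text line (--psm 7).
    _, binary = cv2.threshold(plate_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary, config="--psm 7").strip()

frame = cv2.imread("car.jpg")                            # hypothetical input image
plates = [recognize_plate(p) for p in find_plate_candidates(frame)]
```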
