Visual Localization

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (1 May 2022) | Viewed by 11160

Special Issue Editor

Prof. Dr. Rémi Boutteau
Laboratoire d’Informatique, du Traitement de l’Information et des Systèmes (LITIS), University of Rouen Normandy, 76800 Saint Etienne du Rouvray, France
Interests: computer vision; localization; artificial intelligence; calibration; autonomous vehicle; image processing; mobile robotics

Special Issue Information

Dear Colleagues,

The tasks involved in autonomous navigation (UAVs, robots and autonomous vehicles) can be categorized into five major modules: perception, localization, mapping, planning and control.

The localization module aims to determine the vehicle's pose (3D location and orientation) and plays a critical role in autonomous navigation. Navigation safety and comfort are highly dependent on the accuracy and robustness of this module.

This localization can be absolute (GPS coordinates or metric coordinates in a known map) or relative (the pose of the vehicle with respect to its lane, its initial pose, etc.). Although there are systems dedicated to localization, such as GPS, their limited accuracy and the loss of signal in difficult environments (indoor or urban settings) make them unsuitable, on their own, for autonomous navigation.
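To make the distinction concrete, the minimal Python sketch below chains relative pose increments, of the kind produced by visual odometry, onto an absolute pose expressed in a map frame; the poses and values are purely hypothetical.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(yaw):
    """Rotation about the vertical axis (planar motion, for simplicity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Absolute pose of the vehicle in the map frame (e.g., from GPS or a known map).
T_map_vehicle = se3(rot_z(0.0), np.array([10.0, 5.0, 0.0]))

# Relative pose increments, e.g., produced by visual odometry between frames.
increments = [
    se3(rot_z(0.05), np.array([1.0, 0.0, 0.0])),
    se3(rot_z(0.05), np.array([1.0, 0.0, 0.0])),
]

# Chaining relative estimates yields an updated absolute pose.
for T_rel in increments:
    T_map_vehicle = T_map_vehicle @ T_rel

print("Estimated position in map frame:", T_map_vehicle[:3, 3])
```

Because each increment carries a small error, purely relative localization drifts over time, which is why absolute corrections (GPS fixes, map matching, loop closures) remain necessary.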

When the localization module relies on one or more cameras, it is referred to as visual localization. Visual localization is particularly important for improving the accuracy and robustness of localization in difficult environments.

This Special Issue of the Journal of Imaging aims to feature papers on recent advances in visual localization. All levels of localization are of interest for this Special Issue (visual odometry, structure from motion, simultaneous localization and mapping, and place recognition), for any method based on the use of at least one camera. We also encourage work based on multisensor fusion and on the use of emerging imaging techniques (plenoptic cameras, event cameras, etc.).

Prof. Dr. Rémi Boutteau
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Visual localization
  • Visual odometry
  • Structure from motion (SfM)
  • Simultaneous localization and mapping (SLAM)
  • Bundle adjustment
  • Place recognition
  • Mapping
  • Tracking
  • Pose estimation
  • Long-term visual localization
  • Localization with emerging sensors (plenoptic camera and event camera)
  • Object detection and localization
  • Visual descriptors for efficient localization
  • Sensor fusion for localization (camera/lidar, visual/inertial, etc.)
  • Indoor localization
  • Deep learning for visual localization
  • Semantic visual localization

Published Papers (4 papers)


Research

13 pages, 11653 KiB  
Article
A Real-Time Method for Time-to-Collision Estimation from Aerial Images
by Daniel Tøttrup, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen and Rui Pimentel de Figueiredo
J. Imaging 2022, 8(3), 62; https://doi.org/10.3390/jimaging8030062 - 03 Mar 2022
Cited by 5 | Viewed by 2467 | Correction
Abstract
Large vessels such as container ships rely on experienced pilots with extensive knowledge of the local streams and tides, who are responsible for maneuvering the vessel to its desired location. This work proposes estimating the time-to-collision (TTC) between moving objects (i.e., vessels) using real-time video data captured from aerial drones in dynamic maritime environments. Our deep-learning-based methods utilize features optimized with realistic virtually generated data for reliable and robust object detection, segmentation, and tracking. Furthermore, we use rotated bounding box representations, obtained from fine semantic segmentation of objects, for enhanced TTC estimation accuracy. We intuitively present collision estimates as collision arrows that gradually change color to red to indicate an imminent collision. Experiments conducted in a realistic dockyard virtual environment show that our approaches precisely, robustly, and efficiently predict the TTC between dynamic objects seen from a top view, with a mean error and a standard deviation of 0.358 s and 0.114 s, respectively, in a worst-case scenario.
(This article belongs to the Special Issue Visual Localization)
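For readers unfamiliar with time-to-collision, the minimal sketch below estimates a TTC from two tracked positions and velocities under a constant-velocity assumption; the numbers are hypothetical, and the sketch does not reproduce the authors' deep-learning detection, segmentation, and tracking pipeline.

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=0.0):
    """Estimated time until two constant-velocity objects come within `radius`.

    p1, p2: current 2D positions (e.g., metres in a top-view frame)
    v1, v2: estimated 2D velocities (metres per second)
    Returns np.inf if the objects are not on a closing course.
    """
    rel_p = p2 - p1                # relative position
    rel_v = v2 - v1                # relative velocity
    rel_speed_sq = rel_v @ rel_v
    if rel_speed_sq < 1e-9:        # effectively no relative motion
        return np.inf
    # Time of closest approach under the constant-velocity assumption.
    t_star = -(rel_p @ rel_v) / rel_speed_sq
    if t_star <= 0:
        return np.inf              # already diverging
    closest_dist = np.linalg.norm(rel_p + t_star * rel_v)
    return t_star if closest_dist <= radius else np.inf

# Hypothetical example: two vessels tracked from a top-view aerial frame.
ttc = time_to_collision(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                        np.array([40.0, 1.0]), np.array([-2.0, 0.0]),
                        radius=5.0)
print(f"TTC: {ttc:.2f} s")
```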

25 pages, 12342 KiB  
Article
Methodology for the Automated Visual Detection of Bird and Bat Collision Fatalities at Onshore Wind Turbines
by Christof Happ, Alexander Sutor and Klaus Hochradel
J. Imaging 2021, 7(12), 272; https://doi.org/10.3390/jimaging7120272 - 09 Dec 2021
Cited by 1 | Viewed by 2174
Abstract
The number of collision fatalities is one of the main quantification measures for research concerning wind power impacts on birds and bats. Despite being integral to ongoing investigations as well as regulatory approvals, the state-of-the-art method for the detection of fatalities remains a manual search by humans or dogs. This is expensive and time consuming, and the efficiency varies greatly among studies. Therefore, we developed a methodology for automatic detection using visual/near-infrared cameras for daytime and thermal cameras for nighttime. The cameras can be installed in the nacelle of wind turbines to monitor the area below. The methodology is centered around software that analyzes the images in real time using pixel-wise and region-based methods. We found that the structural similarity is the most important measure for the decision about a detection. Phantom drop tests in an actual wind test field, with the system installed 75 m above the ground, resulted in a sensitivity of 75.6% for the nighttime detection and 84.3% for the daylight detection. The night camera detected 2.47 false positives per hour using a time window designed for our phantom drop tests. In real applications, however, this time window can be extended to eliminate false positives caused by nocturnally active animals. Excluding these from our data reduced the false positive rate to 0.05 per hour. The daylight camera detected 0.20 false positives per hour. Our proposed method has the advantages of being more consistent, more objective, less time consuming, and less expensive than manual search methods.
(This article belongs to the Special Issue Visual Localization)
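As background on the structural-similarity criterion highlighted in the abstract, the sketch below flags pixels whose local SSIM against a reference frame falls below a threshold; the threshold, image sizes, and synthetic data are assumptions, and this is not the authors' full pixel-wise and region-based pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def detect_change(reference, current, threshold=0.6):
    """Flag regions where the current frame differs structurally from a reference.

    reference, current: 2D grayscale images of the same shape, values in [0, 1].
    Returns a boolean mask of candidate detection pixels.
    """
    # full=True also returns the per-pixel SSIM map, not just the mean score.
    score, ssim_map = ssim(reference, current, full=True, data_range=1.0)
    # Low local similarity -> structural change -> candidate detection.
    return ssim_map < threshold

# Hypothetical example with synthetic data.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
cur = ref.copy()
cur[60:70, 60:70] = 0.0           # simulate a new dark object on the ground
mask = detect_change(ref, cur)
print("Candidate detection pixels:", int(mask.sum()))
```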

30 pages, 5078 KiB  
Article
A Fast and Accurate Approach to Multiple-Vehicle Localization and Tracking from Monocular Aerial Images
by Daniel Tøttrup, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen and Rui Pimentel de Figueiredo
J. Imaging 2021, 7(12), 270; https://doi.org/10.3390/jimaging7120270 - 08 Dec 2021
Cited by 3 | Viewed by 2011
Abstract
In this work, we present a novel end-to-end solution for tracking objects (i.e., vessels) in dynamic maritime environments using video streams from aerial drones. Our method relies on deep features, learned from realistic simulation data, for robust object detection, segmentation, and tracking. Furthermore, we propose the use of rotated bounding-box representations, computed from pixel-level object segmentation, which improve tracking accuracy by reducing erroneous data associations when combined with appearance-based features. A thorough set of experiments in a realistic shipyard simulation environment demonstrates that our method can accurately and quickly detect and track dynamic objects seen from a top view.
(This article belongs to the Special Issue Visual Localization)
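As background on rotated bounding boxes derived from segmentation masks, the sketch below fits a minimum-area rotated rectangle to a binary mask with OpenCV; the synthetic mask is hypothetical, and cv2.minAreaRect is one common choice for this step, not necessarily the authors' exact procedure.

```python
import cv2
import numpy as np

def rotated_box_from_mask(mask):
    """Fit a rotated bounding box to a binary segmentation mask.

    mask: uint8 array with object pixels set to 255.
    Returns ((cx, cy), (w, h), angle) or None if no object is found.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # keep the dominant blob
    return cv2.minAreaRect(largest)                # ((cx, cy), (w, h), angle)

# Hypothetical example: an elongated, tilted blob standing in for a vessel mask.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(mask, (100, 100), (60, 15), 30, 0, 360, 255, -1)
rect = rotated_box_from_mask(mask)
print("Rotated box ((cx, cy), (w, h), angle):", rect)
print("Corner points:\n", cv2.boxPoints(rect))
```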

15 pages, 10020 KiB  
Article
Real-Time 3D Multi-Object Detection and Localization Based on Deep Learning for Road and Railway Smart Mobility
by Antoine Mauri, Redouane Khemmar, Benoit Decoux, Madjid Haddad and Rémi Boutteau
J. Imaging 2021, 7(8), 145; https://doi.org/10.3390/jimaging7080145 - 12 Aug 2021
Cited by 11 | Viewed by 3504
Abstract
For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new real-time deep learning approach for 3D multi-object detection for smart mobility, not only on roads but also on railways. To obtain the 3D bounding boxes of the objects, we modified a proven real-time 2D detector, YOLOv3, to predict 3D object localization, object dimensions, and object orientation. Our method was evaluated on KITTI’s road dataset as well as on our own hybrid virtual road/rail dataset acquired from the video game Grand Theft Auto (GTA) V. The evaluation on these two datasets shows good accuracy and, more importantly, that the method can be used in real-time conditions in road and rail traffic environments. Through our experimental results, we also show the importance of accurate prediction of the regions of interest (RoIs) used in the estimation of the 3D bounding box parameters.
(This article belongs to the Special Issue Visual Localization)
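As background on recovering a 3D bounding box from a predicted location, dimensions, and orientation, the sketch below projects the eight corners of such a box into the image with known camera intrinsics; the KITTI-like intrinsic values and the box parameters are hypothetical, and the modified YOLOv3 regression itself is not shown.

```python
import numpy as np

def project_3d_box(center, dims, yaw, K):
    """Project the 8 corners of a 3D box into the image plane.

    center: (x, y, z) box centre in camera coordinates (metres, z forward)
    dims:   (length, width, height) of the object
    yaw:    rotation about the vertical (y) axis of the camera frame
    K:      3x3 camera intrinsic matrix
    Returns an (8, 2) array of pixel coordinates.
    """
    l, w, h = dims
    # Box corners in the object frame, centred at the origin.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2
    y = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2
    z = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2
    corners = np.vstack([x, y, z])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about the vertical axis
    corners = R @ corners + np.asarray(center).reshape(3, 1)
    pts = K @ corners                                   # pinhole projection
    return (pts[:2] / pts[2]).T

# Hypothetical example: a car-sized box 20 m in front of the camera.
K = np.array([[721.5, 0.0, 609.6], [0.0, 721.5, 172.9], [0.0, 0.0, 1.0]])
uv = project_3d_box(center=(0.0, 1.0, 20.0), dims=(4.0, 1.8, 1.5), yaw=0.3, K=K)
print(np.round(uv, 1))
```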
