Special Issue "Advanced Scene Perception for Augmented Reality"

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Mixed, Augmented and Virtual Reality".

Deadline for manuscript submissions: 15 August 2021.

Special Issue Editors

Prof. Dr. Didier Stricker
Guest Editor
German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany
Interests: 3D computer vision; augmented reality; SLAM; sensor fusion; activity/workflow modelling and recognition; semantic segmentation; hand-object interaction; real-time edge AI for AR
Dr. Jason Rambach
Guest Editor
German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany
Interests: 3D computer vision; augmented reality; object pose estimation and tracking; machine learning; sensor fusion; domain adaptation; SLAM; 3D sensing

Special Issue Information

Dear Colleagues,

Augmented Reality (AR), which combines virtual elements with the real world, has shown impressive results in a variety of application fields and has gained significant attention in recent years due to its vast potential. AR applications rely heavily on the quality and extent of the understanding of the user's surroundings, as well as on the dynamic monitoring of the user's interactions with their environment. While traditional AR relied primarily on precise localization of the user, a deeper scene perception at multiple levels is now expected, ranging from dense environment reconstruction and semantic understanding to hand–object interaction and action recognition. An advanced, efficient understanding of the surroundings enables AR applications that support full interaction between real and virtual elements and that can reliably monitor and support users in complex real-world tasks such as industrial maintenance or medical procedures.

In this Special Issue, we aim to feature novel research that advances the state of the art in scene perception for AR, contributing to topics including semantic SLAM, object pose estimation and tracking, dynamic scene analysis, 3D environmental sensing and sensor fusion, hand tracking and hand–object interaction, and illumination reconstruction. Comprehensive state-of-the-art reviews on relevant topics, as well as innovative AR applications taking advantage of recent scene perception developments, are also highly welcome.

Prof. Dr. Didier Stricker
Dr. Jason Rambach
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (with the exception of conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Augmented Reality
  • 3D Computer Vision
  • Semantic SLAM
  • Object detection/pose and shape
  • Machine Learning/Deep Learning
  • Hand-Object Interaction
  • 3D Sensing
  • AR and Edge AI

Published Papers (1 paper)


Research

Article
From IR Images to Point Clouds to Pose: Point Cloud-Based AR Glasses Pose Estimation
J. Imaging 2021, 7(5), 80; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050080 - 27 Apr 2021
Abstract
In this paper, we propose two novel AR glasses pose estimation algorithms that operate on single infrared images, using 3D point clouds as an intermediate representation. Our first approach, “PointsToRotation”, is based on a Deep Neural Network alone, whereas our second approach, “PointsToPose”, is a hybrid model combining Deep Learning with a voting-based mechanism. Our methods utilize a point cloud estimator, which we trained on multi-view infrared images in a semi-supervised manner, that generates point clouds from a single image. We generate a point cloud dataset with our point cloud estimator using the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with the corresponding 6-DoF poses. Compared to another point cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%.
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)
