
Sensors for 3D Cameras System

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 6445

Special Issue Editors

Guest Editor
Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council of Italy (CNR), 80078 Pozzuoli, Italy
Interests: computer vision; machine learning; signal processing; assistive technology

Guest Editor
Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milan, Italy
Interests: computer vision; 3D acquisition systems; tomography; deep learning; pattern recognition

Special Issue Information

Dear Colleagues,

This Special Issue focuses on recent advances in specific technologies and applications for 3D imaging. 3D sensing and its related applications form a rapidly growing research field in which novel solutions are attracting significant interest from both academia and industry. According to Allied Market Research, the global 3D camera market is projected to reach $11.13 billion by 2024, registering a compound annual growth rate of 37.1% over the period from 2018 to 2024.

Nowadays, a large number of mid-range to high-end smartphones integrate 3D sensors for applications ranging from 3D face recognition to avatar creation and photography enhancement. 3D sensing is also becoming crucial for self-driving cars, where LiDAR is the main device for detecting pedestrians and vehicles and estimating their distance. Another field in which 3D sensors play a crucial role is Industry 4.0 and manufacturing, where their combination with AI has become a fundamental element of quality control on production lines, driverless transport systems, and random bin picking.

Assistive technologies also greatly benefit from 3D sensing, as does the analysis of sports contexts and the monitoring of natural and manufactured structures for disaster prevention and resilience tasks.

The aforementioned application fields are just a few examples of contexts in which 3D sensing is gaining relevance, and for this Special Issue, innovative sensors, dedicated hardware, and related applications (e.g., stereoscopic imaging, structured light, SPAD cameras, or time-of-flight sensing) are welcome.
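For readers unfamiliar with the time-of-flight sensing mentioned above, the underlying depth equations are simple; the sketch below is purely illustrative (it is not tied to any paper in this issue, and the example timing values are assumptions):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s


def pulsed_tof_depth(round_trip_s: float) -> float:
    """Depth from a pulsed ToF measurement: light travels to the target and back."""
    return C * round_trip_s / 2.0


def cw_tof_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from a continuous-wave ToF phase shift.

    Unambiguous only up to c / (2 * mod_freq_hz), after which the phase wraps.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)


# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(round(pulsed_tof_depth(10e-9), 3))  # 1.499
```

The continuous-wave variant trades range for precision: a higher modulation frequency shortens the unambiguous range but makes each phase step correspond to a finer depth increment.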

Dr. Marco Leo
Prof. Dr. Marco Marcon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D sensors
  • Stereo vision
  • Structured-light cameras
  • Time-of-flight cameras
  • Depth map acquisition
  • LiDAR

Published Papers (2 papers)


Research

15 pages, 11309 KiB  
Article
Comprehensive High-Quality Three-Dimensional Display System Based on a Simplified Light-Field Image Acquisition Method and a Full-Connected Deep Neural Network
by Munkh-Uchral Erdenebat, Tuvshinjargal Amgalan, Anar Khuderchuluun, Oh-Seung Nam, Seok-Hee Jeon, Ki-Chul Kwon and Nam Kim
Sensors 2023, 23(14), 6245; https://0-doi-org.brum.beds.ac.uk/10.3390/s23146245 - 08 Jul 2023
Cited by 2 | Viewed by 1080
Abstract
We propose a high-quality three-dimensional display system based on a simplified light-field image acquisition method and a custom-trained fully connected deep neural network. The goal of the proposed system is to acquire and reconstruct light-field images of real-world objects in a general environment at the highest possible quality. The simplified light-field acquisition method captures the three-dimensional information of natural objects in a simple way, with resolution and quality comparable to multi-camera-based methods. A fully connected deep neural network model was trained to output the desired viewpoints of the object at the same quality. The custom-trained instant neural graphics primitives model with hash encoding outputs all desired viewpoints of the object within the acquired viewing angle, based on the input perspectives, matched to the pixel density of the display device and the lens-array specifications, within a significantly short processing time. Finally, the elemental image array was rendered through pixel rearrangement from the entire set of viewpoints to cover the full field of view and was reconstructed as a high-quality three-dimensional visualization on the integral imaging display. The system was implemented successfully, and the displayed visualizations and corresponding evaluation results confirm that the proposed system offers a simple and effective way to acquire light-field images of real objects at high resolution and to present high-quality three-dimensional visualizations on an integral imaging display.
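The pixel rearrangement step described in this abstract, interleaving a grid of rendered viewpoints into an elemental image array for an integral imaging display, can be sketched as follows. This is an illustrative reconstruction of the general technique, not the authors' code; the array layout and orientation conventions are assumptions:

```python
import numpy as np


def viewpoints_to_elemental_images(views: np.ndarray) -> np.ndarray:
    """Interleave a (V1, V2, H, W) grid of grayscale viewpoint images into an
    elemental image array of shape (H*V1, W*V2): each lenslet cell of size
    V1 x V2 collects the pixel at the same (y, x) position from every viewpoint.
    """
    V1, V2, H, W = views.shape
    # Reorder axes to (H, V1, W, V2) so that flattening in C order places each
    # spatial pixel's V1 x V2 block of viewpoint samples contiguously.
    return views.transpose(2, 0, 3, 1).reshape(H * V1, W * V2)


# Tiny example: a 3x3 set of 4x5 viewpoints -> a 12x15 elemental image array.
views = np.arange(3 * 3 * 4 * 5, dtype=np.float32).reshape(3, 3, 4, 5)
eia = viewpoints_to_elemental_images(views)
print(eia.shape)  # (12, 15)
```

In a real integral imaging pipeline, V1 x V2 would match the number of pixels behind each lenslet, so that each cell replays its spatial point toward the correct set of viewing directions.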
(This article belongs to the Special Issue Sensors for 3D Cameras System)

28 pages, 109972 KiB  
Article
3D Reconstruction with Single-Shot Structured Light RGB Line Pattern
by Yikang Li and Zhenzhou Wang
Sensors 2021, 21(14), 4819; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144819 - 14 Jul 2021
Cited by 10 | Viewed by 3605
Abstract
Single-shot 3D reconstruction techniques are very important for measuring moving and deforming objects. After many decades of study, a great number of interesting single-shot techniques have been proposed, yet the problem remains open. In this paper, a new approach is proposed to reconstruct deforming and moving objects with a structured-light RGB line pattern. The pattern is coded using parallel red, green, and blue lines at equal intervals to facilitate line segmentation and line indexing. A slope difference distribution (SDD)-based image segmentation method is proposed to segment the lines robustly in the HSV color space. A method of exclusion is proposed to robustly index the red, green, and blue lines, respectively. The indexed lines in the different colors are fused to obtain a phase map for 3D depth calculation. The quantitative measurement errors for a calibration grid and a ball achieved by the proposed approach are 0.46 mm and 0.24 mm, respectively, significantly lower than those achieved by the compared state-of-the-art single-shot techniques.
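Once each colored line has been indexed, depth follows from standard camera–projector triangulation. The fragment below is a generic illustration of that final step only (it does not reproduce the paper's SDD segmentation or exclusion-based indexing; the rectified geometry, focal length, and baseline values are assumptions):

```python
import numpy as np


def triangulate_depth(x_cam: np.ndarray, x_proj: np.ndarray,
                      focal_px: float, baseline_m: float) -> np.ndarray:
    """Depth for a rectified camera-projector pair: z = f * b / (x_cam - x_proj).

    x_cam:  column of a segmented line pixel in the camera image (pixels).
    x_proj: column at which the indexed line was projected (pixels).
    Returns NaN where the disparity is (near) zero.
    """
    disparity = x_cam - x_proj
    z = np.full_like(disparity, np.nan, dtype=np.float64)
    valid = np.abs(disparity) > 1e-6
    z[valid] = focal_px * baseline_m / disparity[valid]
    return z


# One indexed line observed at three rows; f = 800 px, baseline = 0.1 m.
x_cam = np.array([420.0, 421.5, 423.0])
x_proj = np.array([400.0, 400.0, 400.0])
print(triangulate_depth(x_cam, x_proj, 800.0, 0.1))  # approx. [4.0, 3.72, 3.48] m
```

This is why robust line indexing matters: a line matched to the wrong projector column shifts the disparity by the full line spacing and produces a large systematic depth error.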
(This article belongs to the Special Issue Sensors for 3D Cameras System)
