Recent Advances in Depth Sensors and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Optical Sensors".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 29012

Special Issue Editors


Prof. Dr. Lapo Governi
Guest Editor
Dipartimento di Ingegneria Industriale, Università di Firenze, Via di Santa Marta, 3 - 50139 Firenze, Italy
Interests: reverse engineering; 3D digital modeling; computational geometry; computer-aided design; additive manufacturing; biomedical applications of additive manufacturing; biomedical applications of reverse engineering

Dr. Francesco Buonamici
Guest Editor
Dipartimento di Ingegneria Industriale, Università di Firenze, Via di Santa Marta, 3 - 50139 Firenze, Italy
Interests: CAD; personalized medicine; additive manufacturing; 3D scanning; prototyping; reverse engineering; topology optimization; CAD modeling; CAD reconstruction; computer vision

Special Issue Information

Dear Colleagues,

Depth sensors have received considerable attention due to their wide range of applications, such as product inspection and qualification, 3D sensing and autonomous vehicles, human–computer interaction, metrology, reverse engineering, scene reconstruction, biomedicine, cultural heritage, gaming, and augmented reality.

The field of depth sensing is characterized by a wide range of technologies such as active or passive triangulation, time-of-flight, and ultrasound, amongst many others. Depth sensors span from high-end devices, primarily dedicated to high-accuracy and high-resolution 3D scanning, to low-cost solutions, which are generally oriented toward real-time 3D acquisition. This wide variety of devices has produced a dynamic field of research, marked by continuous innovation and capable of encouraging the development of many applications across several technical fields.

In this context, this Special Issue aims to describe the latest trends in the development and application of depth sensing technologies. The Issue welcomes contributions concerning the study of new hardware and the improvement of existing sensor performance, as well as descriptions of meaningful and innovative applications of depth sensors in the widest variety of fields. Studies dealing with algorithm development and software solutions for the generation and analysis of depth data (including data fusion, 3D scene reconstruction, biometrics, and reverse engineering), as well as benchmarks, are encouraged.

Prof. Dr. Lapo Governi
Dr. Francesco Buonamici
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Depth sensing technologies and techniques
  • Depth data analysis and processing
  • Depth data fusion
  • Depth sensors architecture
  • 3D reconstruction and shape retrieval
  • Depth sensing for human-machine interaction
  • Depth sensing for biomedicine
  • Depth sensing for industrial engineering
  • Other depth sensing applications
  • Device characterization

Published Papers (8 papers)


Research

16 pages, 1927 KiB  
Article
Enhancing the Tracking of Seedling Growth Using RGB-Depth Fusion and Deep Learning
by Hadhami Garbouge, Pejman Rasti and David Rousseau
Sensors 2021, 21(24), 8425; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248425 - 17 Dec 2021
Cited by 6 | Viewed by 2660
Abstract
The use of high-throughput phenotyping with imaging and machine learning to monitor seedling growth is a tough yet intriguing subject in plant research. It has recently been addressed with low-cost RGB imaging sensors and deep learning during the daytime. RGB-Depth imaging devices are also accessible at low cost, which opens opportunities to extend seedling monitoring through both day and night. In this article, we investigate the added value of fusing RGB imaging with depth imaging for the task of seedling growth stage monitoring. We propose a deep learning architecture along with RGB-Depth fusion to categorize the first three stages of seedling growth. Results show an average improvement of 5% in correct recognition rate compared with the sole use of RGB images during the day. The best performances are obtained with the early fusion of RGB and Depth. Depth is also shown to enable the detection of growth stages in the absence of light.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
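
To make the "early fusion" result concrete, here is a minimal PyTorch sketch in which the RGB and depth channels are concatenated into a single 4-channel tensor before the first convolution. The layer sizes and the three-class head are illustrative assumptions, not the authors' network.

```python
# Minimal sketch of early RGB-D fusion for growth-stage classification.
# Hypothetical architecture: the 4-channel fused input is the point here.
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    def __init__(self, num_stages: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 = RGB (3) + depth (1)
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_stages)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)  # early fusion: (N, 3+1, H, W)
        return self.classifier(self.features(x).flatten(1))

net = EarlyFusionNet()
logits = net(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 3]): one score per growth stage
```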

19 pages, 5677 KiB  
Article
Metrological Characterization and Comparison of D415, D455, L515 RealSense Devices in the Close Range
by Michaela Servi, Elisa Mussi, Andrea Profili, Rocco Furferi, Yary Volpe, Lapo Governi and Francesco Buonamici
Sensors 2021, 21(22), 7770; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227770 - 22 Nov 2021
Cited by 24 | Viewed by 3922
Abstract
RGB-D cameras are employed in several research fields and application scenarios. Choosing the most appropriate sensor has been made more difficult by the increasing offer of available products. Due to the novelty of RGB-D technologies, there has been a lack of tools to measure and compare the performance of this type of sensor from a metrological perspective. The recent ISO 10360-13:2021 represents the most advanced international standard regulating the metrological characterization of coordinate measuring systems; Part 13, specifically, considers 3D optical sensors. This paper applies the methodology of ISO 10360-13 for the characterization and comparison of three RGB-D cameras produced by Intel® RealSense™ (D415, D455, L515) in the close range (100–1500 mm). ISO 10360-13 procedures, which focus on metrological performance, are integrated with additional tests to evaluate systematic errors (acquisition of flat objects, 3D reconstruction of objects). The paper proposes an off-the-shelf comparison that considers the performance of the sensors throughout their acquisition volume. Results have exposed the strengths and weaknesses of each device. The D415 showed better reconstruction quality on tests strictly related to the short range; the L515 performed better on systematic depth errors; finally, the D455 achieved better results on tests related to the standard.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
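
The "acquisition of flat objects" test can be illustrated with a short sketch: fit a least-squares plane to the captured point cloud and report the RMS residual as a flatness error. The synthetic cloud below stands in for a real capture, and the procedure is a generic baseline rather than the paper's exact protocol.

```python
# Minimal flatness check: fit a least-squares plane to a point cloud of a
# nominally flat target and report the RMS point-to-plane residual.
import numpy as np

def plane_rms_error(points: np.ndarray) -> float:
    """points: (N, 3) XYZ samples, metres."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centred cloud.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal  # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 0.5, size=(10_000, 2))     # 0.5 m square target
z = 1.0 + 0.001 * rng.standard_normal(10_000)    # ~1 mm depth noise at 1 m
cloud = np.column_stack([xy, z])
print(f"RMS plane error: {plane_rms_error(cloud) * 1e3:.2f} mm")
```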

18 pages, 10173 KiB  
Article
AdjustSense: Adaptive 3D Sensing System with Adjustable Spatio-Temporal Resolution and Measurement Range Using High-Speed Omnidirectional Camera and Direct Drive Motor
by Mikihiro Ikura, Sarthak Pathak, Jun Younes Louhi Kasahara, Atsushi Yamashita and Hajime Asama
Sensors 2021, 21(21), 6975; https://0-doi-org.brum.beds.ac.uk/10.3390/s21216975 - 21 Oct 2021
Cited by 1 | Viewed by 1812
Abstract
Many types of 3D sensing devices are commercially available and have been utilized in various technical fields. In most conventional systems with a 3D sensing device, the spatio-temporal resolution and the measurement range are constant during operation. Consequently, it is necessary to select an appropriate sensing system according to the measurement task. Moreover, such conventional systems have difficulty dealing with several measurement targets simultaneously due to these fixed parameters. This issue can hardly be solved by integrating several individual sensing systems into one. Here, we propose a single 3D sensing system that adaptively adjusts the spatio-temporal resolution and the measurement range to switch between multiple measurement tasks. We named the proposed adaptive 3D sensing system “AdjustSense”. In AdjustSense, as a means for the adaptive adjustment of the spatio-temporal resolution and measurement range, we aimed to achieve low-latency visual feedback by integrating not only a high-speed camera, which is a high-speed sensor, but also a direct drive motor, which is a high-speed actuator. This low-latency feedback enables a large range of 3D sensing tasks to be handled simultaneously. We demonstrated the behavior of AdjustSense when the positions of the measured targets in the surroundings were changed. Furthermore, we quantitatively evaluated the spatio-temporal resolution and measurement range from the 3D points obtained. Through two experiments, we showed that AdjustSense can realize multiple measurement tasks: 360° 3D sensing, 3D sensing at a high spatial resolution around multiple targets, and local 3D sensing at a high spatio-temporal resolution around a single object.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
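
A toy sketch of the adaptive principle behind AdjustSense: slow the scanning motor in angular regions that contain targets, densifying 3D samples there, and speed it up elsewhere. The target positions, speeds, and window width are invented for illustration; the real system closes this loop with low-latency feedback from the high-speed omnidirectional camera.

```python
# Toy adaptive scan: dwell longer (scan slower) near known target angles,
# which raises the angular sample density around them.
import numpy as np

targets_deg = [45.0, 200.0]   # hypothetical target bearings
slow, fast = 30.0, 300.0      # assumed motor speeds, deg/s
window = 15.0                 # half-width of the high-resolution zone, deg

def motor_speed(angle_deg: float) -> float:
    """Commanded speed for the current scan angle (wrap-aware distance)."""
    near = any(min(abs(angle_deg - t), 360.0 - abs(angle_deg - t)) < window
               for t in targets_deg)
    return slow if near else fast

# One revolution: dwell time per degree is inversely proportional to speed,
# so the 3D sample density rises around the targets.
angles = np.arange(0.0, 360.0, 1.0)
dwell = np.array([1.0 / motor_speed(a) for a in angles])
print(f"sample density near targets vs. elsewhere: {dwell.max() / dwell.min():.0f}x")
```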

18 pages, 7968 KiB  
Article
Real-Time FPGA Accelerated Stereo Matching for Temporal Statistical Pattern Projector Systems
by Zan Brus, Marko Kos, Matic Erker and Iztok Kramberger
Sensors 2021, 21(19), 6435; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196435 - 26 Sep 2021
Cited by 1 | Viewed by 2077
Abstract
This paper describes a hardware-accelerated field programmable gate array (FPGA)-based solution capable of real-time stereo matching for temporal statistical pattern projector systems. Modern 3D measurement systems have seen increased use of temporal statistical pattern projectors as their active illumination source. The use of temporal statistical patterns in stereo vision systems has the advantage of not requiring information about pattern characteristics, enabling a simplified projector design. Stereo-matching algorithms used in such systems rely on locally unique temporal changes in brightness to establish a pixel correspondence between the stereo image pair. Finding the temporal correspondence between individual pixels in temporal image pairs is computationally expensive, typically requiring GPU-based solutions to achieve real-time calculation. By leveraging a high-level synthesis approach, matching cost simplification, and FPGA-specific design optimizations, an energy-efficient, high-throughput stereo-matching solution was developed. The design is capable of calculating disparity images on a 1024 × 1024 (@291 FPS) input image pair stream at 8.1 W on an embedded FPGA platform (ZC706). Several design configurations were tested, evaluating device utilization, throughput, power consumption, and performance-per-watt. The average performance-per-watt of the FPGA solution was two times higher than that of a GPU-based solution.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
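
A minimal sketch of temporal stereo matching of this kind: each pixel is described by its brightness sequence over T projected patterns, and the disparity is the horizontal shift that minimizes the sum of absolute temporal differences (SAD). The array shapes and the SAD cost are assumptions; the paper's FPGA design relies on its own simplified matching cost to reach this throughput.

```python
# Temporal SAD stereo matching over a stack of T pattern frames.
import numpy as np

def temporal_sad_disparity(left: np.ndarray, right: np.ndarray,
                           max_disp: int) -> np.ndarray:
    """left, right: (T, H, W) temporal stacks of a rectified stereo pair."""
    _, H, W = left.shape
    cost = np.full((max_disp + 1, H, W), np.inf)
    for d in range(max_disp + 1):
        # Candidate match: left pixel (y, x) vs. right pixel (y, x - d).
        diff = np.abs(left[:, :, d:] - right[:, :, :W - d])
        cost[d, :, d:] = diff.sum(axis=0)  # SAD over the temporal axis
    return cost.argmin(axis=0)             # winner-takes-all disparity

rng = np.random.default_rng(1)
right = rng.random((8, 32, 64))            # 8 temporal pattern frames
left = np.roll(right, shift=5, axis=2)     # synthetic pair, true disparity 5
print(np.median(temporal_sad_disparity(left, right, max_disp=10)))  # ~5.0
```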

20 pages, 3724 KiB  
Article
Measurement Noise Model for Depth Camera-Based People Tracking
by Otto Korkalo and Tapio Takala
Sensors 2021, 21(13), 4488; https://0-doi-org.brum.beds.ac.uk/10.3390/s21134488 - 30 Jun 2021
Cited by 3 | Viewed by 2572
Abstract
Depth cameras are widely used in people tracking applications. They typically suffer from significant range measurement noise, which causes uncertainty in the detections made of the people. The data fusion, state estimation, and data association tasks require that the measurement uncertainty is modelled, especially in multi-sensor systems. Measurement noise models for different kinds of depth sensors have been proposed; however, the existing approaches require manual calibration procedures which can be impractical to conduct in real-life scenarios. In this paper, we present a new measurement noise model for depth camera-based people tracking. In our tracking solution, we utilise the so-called plan-view approach, where the 3D measurements are transformed to the floor plane and the tracking problem is solved in 2D. We directly model the measurement noise in the plan-view domain, combining the errors that originate from the imaging process and from the geometric transformations of the 3D data. We also present a method for defining the noise models directly from the observations. Together with our depth sensor network self-calibration routine, the approach allows fast and practical deployment of depth-based people tracking systems.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
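
The plan-view approach can be sketched in a few lines: transform 3D detections into a floor-plane frame, drop the height coordinate, and estimate a 2D measurement covariance directly from repeated observations of a stationary person. The transform and the synthetic detections below are assumptions for illustration, not the paper's calibrated noise model.

```python
# Plan-view noise estimation: project 3D detections onto the floor plane
# and compute the empirical 2D covariance for a stationary person.
import numpy as np

def planview_covariance(det_3d: np.ndarray, R: np.ndarray,
                        t: np.ndarray) -> np.ndarray:
    """det_3d: (N, 3) camera-frame detections; returns a 2x2 covariance."""
    pts = det_3d @ R.T + t                    # camera frame -> floor frame
    return np.cov(pts[:, :2], rowvar=False)   # drop height: plan view

# Camera looking horizontally: floor X = camera X, floor Y = camera Z
# (the range direction), floor "up" = -camera Y. Assumed extrinsics.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
t = np.zeros(3)

rng = np.random.default_rng(2)
# Synthetic detections of a person ~3 m away; range noise (camera z)
# dominates, as is typical for depth cameras.
noise = rng.standard_normal((500, 3)) * np.array([0.01, 0.01, 0.05])
dets = np.array([0.0, 1.2, 3.0]) + noise
print(planview_covariance(dets, R, t).round(5))  # elongated along range
```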

25 pages, 6406 KiB  
Article
3D Reconstruction of Non-Rigid Plants and Sensor Data Fusion for Agriculture Phenotyping
by Gustavo Scalabrini Sampaio, Leandro A. Silva and Maurício Marengoni
Sensors 2021, 21(12), 4115; https://0-doi-org.brum.beds.ac.uk/10.3390/s21124115 - 15 Jun 2021
Cited by 10 | Viewed by 3427
Abstract
Technology has been promoting a great transformation in farming. The introduction of robotics, the use of sensors in the field, and advances in computer vision allow new systems to be developed to assist processes, such as phenotyping, throughout the crop's life cycle. This work presents what we believe to be the first system capable of generating 3D models of non-rigid corn plants, which can be used as a tool in the phenotyping process. The system is composed of two modules: a terrestrial acquisition module and a processing module. The terrestrial acquisition module consists of a robot, equipped with an RGB-D camera and three sets of temperature, humidity, and luminosity sensors, that collects data in the field. The processing module conducts the non-rigid 3D reconstruction of the plants and merges the sensor data into these models. The work also presents a novel technique for background removal in depth images, as well as efficient techniques for processing these images and the sensor data. Experiments have shown that, from the models generated and the data collected, plant structural measurements can be performed accurately and the plant's environment can be mapped, allowing the plant's health to be evaluated and providing greater crop efficiency.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
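
For reference, the background-removal step can be approximated by a much simpler thresholding baseline (explicitly not the paper's novel technique): pixels deeper than a plant-distance threshold, or with invalid zero depth, are masked out before reconstruction.

```python
# Thresholding baseline for depth background removal.
import numpy as np

def remove_background(depth_mm: np.ndarray, max_range_mm: float) -> np.ndarray:
    """Return depth with background and invalid pixels set to NaN."""
    out = depth_mm.astype(float)
    out[(out == 0) | (out > max_range_mm)] = np.nan  # 0 = sensor dropout
    return out

depth = np.array([[850, 900, 2400],
                  [0, 880, 2600]], dtype=np.uint16)  # toy 2x3 depth frame, mm
print(remove_background(depth, max_range_mm=1200.0))
```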

20 pages, 7377 KiB  
Article
Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
by Luigi Ariano, Claudio Ferrari, Stefano Berretti and Alberto Del Bimbo
Sensors 2021, 21(2), 589; https://0-doi-org.brum.beds.ac.uk/10.3390/s21020589 - 15 Jan 2021
Cited by 7 | Viewed by 2519
Abstract
Facial Action Units (AUs) correspond to the deformation/contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, with deformations that are asymmetric in many cases. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis built on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most 3DMMs in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, deformation coefficients are learned that enable the 3DMM to deform to 3D target scans showing neutral and expressive faces of the same individual, thus decoupling expression from identity deformations. These deformation coefficients are then used, on the one hand, to train an AU classifier; on the other hand, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state of the art, even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with AU activation.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
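
The generic linear 3DMM operation the paper builds on can be written as S = S0 + Σi αi Ci: a neutral shape deformed by a weighted sum of deformation components. The sketch below uses toy placeholder components, not the learned AU coefficients.

```python
# Linear 3DMM deformation: S = S0 + sum_i alpha_i * C_i.
import numpy as np

rng = np.random.default_rng(3)
n_vertices, n_components = 5_000, 30
S0 = rng.standard_normal((n_vertices, 3))                      # neutral shape
C = 0.01 * rng.standard_normal((n_components, n_vertices, 3))  # components

def deform(alpha: np.ndarray) -> np.ndarray:
    """Apply deformation coefficients alpha of shape (n_components,)."""
    return S0 + np.tensordot(alpha, C, axes=1)

alpha = np.zeros(n_components)
alpha[4] = 1.5              # activate one component, akin to a single AU
S = deform(alpha)
# Only the chosen component displaced the shape; an AU classifier would
# operate on such coefficient vectors instead of the raw vertices.
print(float(np.abs(S - S0).max()))
```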

13 pages, 1582 KiB  
Article
Single-Shot 3D Shape Reconstruction Using Structured Light and Deep Convolutional Neural Networks
by Hieu Nguyen, Yuzeng Wang and Zhaoyang Wang
Sensors 2020, 20(13), 3718; https://0-doi-org.brum.beds.ac.uk/10.3390/s20133718 - 3 Jul 2020
Cited by 76 | Viewed by 7734
Abstract
Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the ever-increasing evolution of sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured-light technique with deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The essential training and validation datasets, with high-quality 3D ground-truth labels, are prepared using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as the depth map, the proposed approach uses an end-to-end network architecture to directly transform a 2D image into its corresponding 3D depth map without extra processing. Three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments have been conducted to demonstrate the validity and robustness of the proposed technique, which is capable of satisfying various 3D shape reconstruction demands in scientific research and engineering applications.
(This article belongs to the Special Issue Recent Advances in Depth Sensors and Applications)
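
The ground-truth labels rest on fringe projection profilometry, whose classical four-step phase-shifting core can be sketched as follows; the synthetic fringes replace real captures, and the paper's multi-frequency sequence (not shown) is what turns the wrapped phase into absolute phase.

```python
# Four-step phase shifting: with fringes I_k = cos(phi + k*pi/2),
# the wrapped phase is phi = atan2(I4 - I2, I1 - I3).
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

x = np.linspace(0.0, 4.0 * np.pi, 512)       # one row spanning two periods
true_phase = np.tile(x, (64, 1))             # 64 identical rows
frames = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*frames)
# phi equals true_phase wrapped into (-pi, pi]; unwrapping (or a multi-
# frequency sequence) recovers the absolute phase used to compute depth.
print(phi.shape, float(np.abs(np.angle(np.exp(1j * (phi - true_phase)))).max()))
```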
