
Sensing and Processing for 3D Computer Vision

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 30016

Special Issue Editor


Prof. Dr. Denis Laurendeau
Collection Editor
Computer Vision and Systems Laboratory, Laval University, 1665 Rue de l’Universite, Universite Laval, Quebec City, QC G1V 0A6, Canada
Interests: 3D sensors; active vision; 3D image processing and understanding; modelling; geometry; 3D sensing and modelling for augmented and virtual reality; applications of 3D computer vision

Special Issue Information

Dear Colleagues,

This Special Issue targets research articles on 3D sensing technology and the use of advanced 3D sensors in computer vision. Original contributions on novel active 3D sensors, stereo reconstruction approaches, and sensor calibration techniques are solicited. Articles on 3D point cloud/mesh processing, geometric modelling, and shape representation and recognition are also of interest. Articles on the application of 3D sensing and modelling to metrology, industrial inspection and quality control, augmented/virtual reality, heritage preservation, the arts, and other fields are also welcome.

Prof. Dr. Denis Laurendeau
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Active/passive 3D sensors
  • Sensor calibration
  • Stereo reconstruction
  • Point cloud/mesh processing
  • Geometry
  • Modelling and representation
  • Shape analysis and recognition
  • Applications of 3D vision


Published Papers (11 papers)

20 pages, 3348 KiB  
Article
Classification of Cracks in Composite Structures Subjected to Low-Velocity Impact Using Distribution-Based Segmentation and Wavelet Analysis of X-ray Tomograms
by Angelika Wronkowicz-Katunin, Andrzej Katunin, Marko Nagode and Jernej Klemenc
Sensors 2021, 21(24), 8342; https://doi.org/10.3390/s21248342 - 14 Dec 2021
Cited by 4 | Viewed by 2145
Abstract
The problem of characterizing structural residual life is one of the most challenging issues of the damage tolerance concept currently applied in modern aviation. Considering the complexity of the internal architecture of the composite structures widely used in aircraft components, as well as the additional complexity introduced by barely visible impact damage, predicting structural residual life is a demanding task. In this paper, the authors proposed a method that detects structural damage after low-velocity impact loading and classifies it with respect to the types of stress acting on the constituents of composite structures. The processing algorithm is based on segmentation of 3D X-ray computed tomograms using the rebmix package, a real-oriented dual-tree wavelet transform, and supporting image processing procedures. The algorithm accurately distinguishes the defined damage types in X-ray computed tomograms, with strong robustness to noise and measurement artifacts. The processing was performed on experimental X-ray computed tomography data of a composite structure with barely visible impact damage, which allowed a better understanding of the fracture mechanisms occurring under such conditions. The gained knowledge will enable more accurate simulation of damage in composite structures, and hence higher accuracy in predicting structural residual life.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
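The distribution-based segmentation step can be illustrated with a minimal, self-contained sketch: a two-component Gaussian mixture fitted to voxel intensities by EM, with the darker component labelled as damage. This is a generic illustration on synthetic data, not the paper's rebmix/dual-tree-wavelet pipeline; all data and thresholds here are made up.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a two-component 1D Gaussian mixture by EM (minimal sketch)."""
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        z = (x[:, None] - mu) / sigma
        p = pi * np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sigma, pi

# Synthetic "tomogram" intensities: a dark crack class and a bright material class
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 500),    # crack voxels
                    rng.normal(0.8, 0.05, 4500)])  # material voxels
mu, sigma, pi = em_two_gaussians(x)
# Label a voxel as crack if it is closer (in sigmas) to the dark component
crack_mask = np.abs(x - mu[0]) / sigma[0] < np.abs(x - mu[1]) / sigma[1]
```

In the paper, segmentation is followed by wavelet analysis and morphological processing; the mixture fit above only shows how intensity distributions alone can separate damage from material.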

13 pages, 3112 KiB  
Article
SAP-Net: A Simple and Robust 3D Point Cloud Registration Network Based on Local Shape Features
by Jinlong Li, Yuntao Li, Jiang Long, Yu Zhang and Xiaorong Gao
Sensors 2021, 21(21), 7177; https://doi.org/10.3390/s21217177 - 28 Oct 2021
Cited by 5 | Viewed by 1907
Abstract
Point cloud registration is a key step in the reconstruction of 3D data models. The traditional ICP registration algorithm depends on the initial position of the point cloud; otherwise, it may get trapped in local optima. In addition, registration methods based on the feature learning of PointNet cannot directly or effectively extract local features. To solve these two problems, this paper proposes SAP-Net, inspired by CorsNet and PointNet++, as an optimized CorsNet. More specifically, SAP-Net first uses the set abstraction layer in PointNet++ as the feature extraction layer and then combines the global features with the initial template point cloud. Finally, PointNet is used as the transform prediction layer to directly obtain the six parameters required for point cloud registration, namely the rotation matrix and the translation vector. Experiments on the ModelNet40 dataset and real data show that SAP-Net not only outperforms ICP and CorsNet on both seen and unseen categories of point clouds but also has stronger robustness.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
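The quantity SAP-Net regresses, a rotation matrix plus a translation vector, has a classical closed-form solution once correspondences are known: the Kabsch/Umeyama alignment. A minimal sketch on synthetic data, unrelated to the network itself:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t aligning src to dst given correspondences
    (Kabsch/Umeyama): the same rotation + translation a registration
    network predicts, but computed in closed form."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true                 # noiseless transformed copy
R, t = rigid_align(src, dst)
```

Learning-based methods such as SAP-Net exist precisely because real scans lack known correspondences and contain noise and partial overlap, where this closed-form step alone is insufficient.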

27 pages, 25484 KiB  
Article
Visual Browse and Exploration in Motion Capture Data with Phylogenetic Tree of Context-Aware Poses
by Songle Chen, Xuejian Zhao, Bingqing Luo and Zhixin Sun
Sensors 2020, 20(18), 5224; https://doi.org/10.3390/s20185224 - 13 Sep 2020
Cited by 1 | Viewed by 2269
Abstract
Visual browsing and exploration of motion capture data treat resource acquisition as a human–computer interaction problem, and they are an essential approach to target motion search. This paper presents a progressive schema that starts from pose browsing, then locates the region of interest, and finally switches to online exploration of relevant motions. It mainly addresses three core issues. First, to ease the contradiction between the limited visual space and the ever-increasing size of real-world databases, it applies affinity propagation to a numerical pose similarity measure to perform data abstraction and obtain representative poses of clusters. Second, to construct a meaningful neighborhood for user browsing, it further merges logical pose similarity measures with the weight quartets and casts the isolated representative poses into a phylogenetic tree structure. Third, to support online motion exploration, including motion ranking and clustering, a biLSTM-based auto-encoder is proposed to encode the high-dimensional pose context into a compact latent space. Experimental results on CMU's motion capture data verify the effectiveness of the proposed method.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
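Choosing a representative pose per cluster can be sketched with a simple medoid rule under Euclidean joint-position distance. The paper's affinity propagation selects exemplars via a more elaborate message-passing scheme, so the following is only a stand-in on synthetic poses:

```python
import numpy as np

def pose_medoid(poses):
    """Index of the cluster's representative pose: the medoid under
    Euclidean distance between flattened (J, 3) joint arrays.
    A simple stand-in for affinity-propagation exemplars."""
    flat = poses.reshape(len(poses), -1)
    d = np.linalg.norm(flat[:, None] - flat[None, :], axis=2)
    return int(np.argmin(d.sum(axis=1)))   # smallest total distance

# Synthetic cluster: 20 noisy copies of a 15-joint pose
rng = np.random.default_rng(2)
center = rng.normal(size=(15, 3))
cluster = center + rng.normal(scale=0.05, size=(20, 15, 3))
cluster[0] = center                         # plant an exact centre pose
rep = pose_medoid(cluster)
```

The medoid is preferred over the mean here because it is an actual pose from the data, which matters when the representative must be rendered for browsing.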

22 pages, 5577 KiB  
Article
Probabilistic Modeling of Motion Blur for Time-of-Flight Sensors
by Bryan Rodriguez, Xinxiang Zhang and Dinesh Rajan
Sensors 2022, 22(3), 1182; https://doi.org/10.3390/s22031182 - 04 Feb 2022
Cited by 5 | Viewed by 2122
Abstract
Synthetically creating motion blur in two-dimensional (2D) images is a well-understood process and has been used in image processing for developing deblurring systems. There are no well-established techniques for synthetically generating arbitrary motion blur within three-dimensional (3D) images, such as depth maps and point clouds, since their behavior is not as well understood. As a prerequisite, we previously developed a method for generating synthetic motion blur in a plane parallel to the sensor detector plane. In this work, as a major extension, we generalize our previous framework to synthetically generate linear and radial motion blur along planes at arbitrary angles with respect to the sensor detector plane. Our framework accurately captures the behavior of the real motion blur encountered with a Time-of-Flight (ToF) sensor. This work uses a probabilistic model that predicts the location of the invalid pixels typically present in depth maps containing real motion blur. More specifically, the probabilistic model considers different motion-path angles and the velocity of an object with respect to the image plane of a ToF sensor. Extensive experimental results demonstrate how our framework can be applied to synthetically create radial, linear, and combined radial-linear motion blur. We quantify the accuracy of the synthetic generation method by comparing the resulting synthetic depth map to the experimentally captured depth map with motion. Our results indicate that our framework achieves an average Boundary F1 (BF) score for invalid pixels of 0.7192 for synthetic radial motion blur, 0.8778 for synthetic linear motion blur, and 0.62 for synthetic combined radial-linear motion blur.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
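The Boundary F1 (BF) score used above can be sketched as boundary-pixel precision/recall within a pixel tolerance. The exact tolerance and boundary definition the authors used are not stated here, so this is a generic implementation on toy masks:

```python
import numpy as np

def boundary(mask):
    """Boundary pixels: mask pixels with at least one 4-neighbour outside."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def bf_score(pred, gt, tol=2.0):
    """Boundary F1: harmonic mean of the fractions of predicted and
    ground-truth boundary pixels that lie within tol pixels of each
    other (tolerance chosen arbitrarily for this sketch)."""
    bp, bg = boundary(pred), boundary(gt)
    if len(bp) == 0 or len(bg) == 0:
        return 0.0
    d = np.linalg.norm(bp[:, None].astype(float) - bg[None, :], axis=2)
    precision = (d.min(axis=1) <= tol).mean()
    recall = (d.min(axis=0) <= tol).mean()
    return 2 * precision * recall / (precision + recall + 1e-12)

gt = np.zeros((40, 40), dtype=bool)
gt[10:30, 10:30] = True
pred = np.zeros_like(gt)
pred[11:31, 10:30] = True      # same square, shifted one pixel down
score = bf_score(pred, gt)
```

A one-pixel shift scores near 1.0 under a two-pixel tolerance, which is why BF is a forgiving but boundary-sensitive way to compare invalid-pixel maps.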

15 pages, 3891 KiB  
Article
Nonlinear Optimization of Light Field Point Cloud
by Yuriy Anisimov, Jason Raphael Rambach and Didier Stricker
Sensors 2022, 22(3), 814; https://doi.org/10.3390/s22030814 - 21 Jan 2022
Cited by 1 | Viewed by 2066
Abstract
The problem of accurate three-dimensional reconstruction is important for many research and industrial applications. Light field depth estimation utilizes many observations of the scene and can hence provide accurate reconstruction. We present a method that enhances an existing reconstruction algorithm with per-layer disparity filtering and consistency-based hole filling. In addition, we reformulate the reconstruction result as a point cloud assembled from different light field viewpoints and propose a non-linear optimization of it. The capability of our method to reconstruct scenes with acceptable quality was verified by evaluation on a publicly available dataset.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
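The point cloud that is then optimized can be formed by back-projecting a disparity map through a pinhole model (Z = f*B/d), with zero-disparity pixels treated as holes. A generic sketch with made-up camera parameters, not the paper's light field geometry:

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Back-project a disparity map to a 3D point cloud with a pinhole
    model. Pixels with zero disparity are treated as holes and skipped."""
    v, u = np.indices(disp.shape)
    valid = disp > 0
    Z = f * baseline / disp[valid]          # depth from disparity
    X = (u[valid] - cx) * Z / f
    Y = (v[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

# Toy 4x4 disparity map with one hole; hypothetical f, baseline, centre
disp = np.full((4, 4), 2.0)
disp[0, 0] = 0.0
pts = disparity_to_points(disp, f=100.0, baseline=0.1, cx=2.0, cy=2.0)
```

A non-linear refinement would then adjust these points (or the underlying disparities) to maximize consistency across viewpoints; the conversion above only defines the variables such an optimization works on.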

17 pages, 12998 KiB  
Article
Multiple Cylinder Extraction from Organized Point Clouds
by Saed Moradi, Denis Laurendeau and Clement Gosselin
Sensors 2021, 21(22), 7630; https://doi.org/10.3390/s21227630 - 17 Nov 2021
Cited by 1 | Viewed by 2668
Abstract
Most man-made objects are composed of a few basic geometric primitives (GPs) such as spheres, cylinders, planes, ellipsoids, or cones. Thus, the object recognition problem can be considered as one of geometric primitive extraction. Among the different geometric primitives, cylinders are the most frequently encountered GPs in real-world scenes. Therefore, cylinder detection and extraction are of great importance in 3D computer vision. Despite the rapid progress of cylinder detection algorithms, there are still two open problems in this area. First, a robust strategy is needed for the initial sample selection component of the cylinder extraction module. Second, detecting multiple cylinders simultaneously has not yet been investigated in depth. In this paper, a robust solution is provided to address these problems. The proposed solution is divided into three sub-modules. The first sub-module is a fast and accurate normal vector estimation algorithm for raw depth images, with a closed-form solution for computing the normal vector at each point. The second sub-module uses the maximally stable extremal regions (MSER) feature detector to simultaneously detect the cylinders present in the scene. Finally, the detected cylinders are extracted using the proposed cylinder extraction algorithm. Quantitative and qualitative results show that the proposed algorithm outperforms the baseline algorithms in each of the following areas: normal estimation, cylinder detection, and cylinder extraction.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
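A common closed-form normal estimate for organized point clouds is the cross product of the two image-grid tangent vectors. The paper derives its own estimator directly from raw depth images, so the following is only the textbook variant, demonstrated on a synthetic plane:

```python
import numpy as np

def normals_from_organized(points):
    """Per-point normals for an organized (H, W, 3) point cloud via the
    cross product of central-difference tangents along the image grid.
    Returns normals for interior points only."""
    du = points[:, 2:] - points[:, :-2]      # horizontal tangent
    dv = points[2:, :] - points[:-2, :]      # vertical tangent
    n = np.cross(du[1:-1], dv[:, 1:-1])      # interior (H-2, W-2, 3)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    return n

# Organized points sampled from the plane z = 1 + 0.5 * x
h, w = 8, 8
x, y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
pts = np.stack([x, y, 1.0 + 0.5 * x], axis=2)
n = normals_from_organized(pts)
```

For a plane z = 1 + 0.5x every interior normal should equal (-0.5, 0, 1) normalized; real depth maps additionally need robustness to noise and depth discontinuities, which is what the paper's estimator targets.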

14 pages, 4044 KiB  
Letter
High-Speed Measurement of Shape and Vibration: Whole-Field Systems for Motion Capture and Vibration Modal Analysis by OPPA Method
by Yoshiharu Morimoto
Sensors 2020, 20(15), 4263; https://doi.org/10.3390/s20154263 - 30 Jul 2020
Cited by 8 | Viewed by 3385
Abstract
In shape measurement systems using a grating projection method, phase analysis of the projected grating provides accurate results. The most popular phase analysis method is the phase shifting method, which requires several images for one shape analysis; therefore, the object must not move during the measurement. The authors previously proposed a new accurate and high-speed shape measurement method, the one-pitch phase analysis (OPPA) method, which can determine the phase at every point of a single image of an object with a grating projected onto it. In the OPPA optical system, regardless of the distance of the object from the camera, the one-pitch length (in pixels) on the imaging surface of the camera sensor is always constant. Therefore, the brightness data for one pitch at any point of the image can easily be analyzed to determine the phase distribution, i.e., the shape. This technology can be applied to the measurement of objects in motion, including automobiles, robot arms, products on a conveyor belt, and vibrating objects. This paper describes the principle of the OPPA method and example applications to real-time human motion capture and modal analysis of the free vibration of a flat cantilever plate after hammering. The results show the usefulness of the OPPA method.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
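Recovering a phase from exactly one pitch of brightness samples can be sketched with the classic N-bucket (first DFT harmonic) formula. OPPA's actual per-pixel procedure may differ, so this is only a generic illustration of single-pitch phase analysis:

```python
import numpy as np

def one_pitch_phase(brightness):
    """Phase of a sinusoidal grating from one pitch of N samples, via
    the first DFT harmonic (the classic N-bucket phase formula)."""
    n = len(brightness)
    k = 2 * np.pi * np.arange(n) / n
    # For b_i = A + B*cos(k_i + phi), the weighted sums isolate phi:
    return np.arctan2(-(brightness * np.sin(k)).sum(),
                      (brightness * np.cos(k)).sum())

# One pitch (16 samples) of a grating with known phase 0.7 rad
phi_true = 0.7
k = 2 * np.pi * np.arange(16) / 16
b = 100.0 + 50.0 * np.cos(k + phi_true)   # offset + modulated fringe
phi = one_pitch_phase(b)
```

Because the formula needs only the samples inside one pitch, a single image suffices, which is exactly what makes this family of methods usable on moving objects.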

45 pages, 9677 KiB  
Article
Calibration of Stereo Pairs Using Speckle Metrology
by Éric Samson, Denis Laurendeau and Marc Parizeau
Sensors 2022, 22(5), 1784; https://doi.org/10.3390/s22051784 - 24 Feb 2022
Viewed by 1563
Abstract
The accuracy of 3D reconstruction for metrology applications using active stereo pairs depends on the quality of the calibration of the system. Active stereo pairs are generally composed of cameras mounted on tilt/pan mechanisms separated by a constant or variable baseline. This paper presents a calibration approach based on speckle metrology that allows the separation of translation and rotation in the estimation of extrinsic parameters. To achieve speckle-based calibration, a device called an Almost Punctual Speckle Source (APSS) is introduced. Using the APSS, a thorough method for the calibration of the extrinsic parameters of stereo pairs is described. Experimental results obtained with a stereo system called the Agile Stereo Pair (ASP) demonstrate that speckle-based calibration achieves better reconstruction performance than standard calibration procedures. Although the experiments were performed with a specific stereo pair, the ASP described in the paper, the speckle-based calibration approach using the APSS can be transposed to other stereo setups.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
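For context, the extrinsic parameters such a calibration estimates define the relative pose between the two cameras. Under the common world-to-camera convention x_cam = R * x_world + t (a convention assumed here, not taken from the paper), the relative pose follows by composition:

```python
import numpy as np

def relative_extrinsics(R1, t1, R2, t2):
    """Relative pose of camera 2 with respect to camera 1, assuming
    world-to-camera extrinsics x_cam = R @ x_world + t for each camera."""
    R = R2 @ R1.T
    t = t2 - R @ t1
    return R, t

# Camera 1 at identity orientation, offset along z; camera 2 rotated
# 90 degrees about y and translated along x (toy values)
R2 = np.array([[0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0]])
t2 = np.array([1.0, 0.0, 0.0])
t1 = np.array([0.0, 0.0, 1.0])
R, t = relative_extrinsics(np.eye(3), t1, R2, t2)

# Sanity check: the world origin maps consistently through both routes
x1 = t1                      # origin seen by camera 1
x2 = R2 @ np.zeros(3) + t2   # origin seen by camera 2
```

Rotation and translation are entangled in this composition; the paper's contribution is a speckle-based procedure that lets them be estimated separately.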

24 pages, 14291 KiB  
Article
Enhanced Soft 3D Reconstruction Method with an Iterative Matching Cost Update Using Object Surface Consensus
by Min-Jae Lee, Gi-Mun Um, Joungil Yun, Won-Sik Cheong and Soon-Yong Park
Sensors 2021, 21(19), 6680; https://doi.org/10.3390/s21196680 - 08 Oct 2021
Cited by 9 | Viewed by 2597
Abstract
In this paper, we propose a multi-view stereo matching method, EnSoft3D (Enhanced Soft 3D Reconstruction), to obtain dense and high-quality depth images. Multi-view stereo is an area of high research interest with wide applications. Motivated by the Soft3D reconstruction method, we introduce a new multi-view stereo matching scheme. The original Soft3D method was introduced for novel view synthesis, while occlusion-aware depth is also reconstructed by integrating the matching costs of Plane Sweep Stereo (PSS) and soft visibility volumes. However, the Soft3D method has an inherent limitation: erroneous PSS matching costs are never updated. To overcome this limitation, the proposed scheme introduces an update process for the PSS matching costs. An inverse consensus kernel is derived from the object surface consensus volume, and the PSS matching costs are iteratively updated using this kernel. The proposed EnSoft3D method reconstructs a highly accurate 3D depth image because the multi-view matching cost and soft visibility are updated simultaneously. The performance of the proposed method is evaluated on structured and unstructured benchmark datasets. Disparity error is measured to verify 3D reconstruction accuracy, and both PSNR and SSIM are measured to verify the simultaneous enhancement of view synthesis.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
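The matching costs that EnSoft3D updates can be illustrated in the rectified two-view case as a SAD cost volume over disparity hypotheses with winner-take-all selection; the paper's plane-sweep costs and consensus-based update are more general. A toy sketch:

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Per-pixel absolute-difference cost over disparity hypotheses
    (a rectified 1D analogue of plane-sweep matching costs).
    cost[d, y, x] compares left (y, x) with right (y, x - d)."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost

# Synthetic rectified pair: right image is the left shifted by 2 pixels
full = np.tile(np.arange(12.0), (4, 1))
left, right = full[:, :10], full[:, 2:]
cost = sad_cost_volume(left, right, max_disp=3)
disp = cost.argmin(axis=0)     # winner-take-all disparity
```

Soft3D-style methods keep the whole volume instead of the argmin and re-weight it by visibility; EnSoft3D's addition is an iterative correction of the costs themselves.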

26 pages, 52976 KiB  
Article
Quantitative 3D Reconstruction from Scanning Electron Microscope Images Based on Affine Camera Models
by Stefan Töberg and Eduard Reithmeier
Sensors 2020, 20(12), 3598; https://doi.org/10.3390/s20123598 - 26 Jun 2020
Cited by 1 | Viewed by 4233
Abstract
Scanning electron microscopes (SEMs) are versatile imaging devices for the micro- and nanoscale that find application in various disciplines, such as the characterization of biological, mineral, or mechanical specimens. Even though the acquired images provide the specimen's two-dimensional (2D) properties, detailed morphological characterizations require knowledge of the three-dimensional (3D) surface structure. To overcome this limitation, a reconstruction routine is presented that allows quantitative depth reconstruction from SEM image sequences. Based on the SEM's imaging properties, which can be well described by an affine camera, the proposed algorithms rely on affine epipolar geometry, self-calibration via factorization, and triangulation from dense correspondences. To achieve the highest robustness and accuracy, different sub-models of the affine camera are applied to the SEM images, and the obtained results are directly compared to confocal laser scanning microscope (CLSM) measurements to identify the ideal parametrization and underlying algorithms. To solve the rectification problem for stereo image pairs of an affine camera, so that dense matching algorithms can be applied, existing approaches are adapted and extended to further enhance the results. The evaluations of this study establish the applicability of affine camera models to SEM images and the accuracies that can be expected from reconstruction routines based on self-calibration and dense matching algorithms.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
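Self-calibration via factorization builds on the Tomasi-Kanade observation that a centred affine measurement matrix has rank 3. A minimal sketch on synthetic views; the affine cameras and points are random, and the metric-upgrade step that resolves the affine ambiguity is omitted:

```python
import numpy as np

def affine_factorization(W):
    """Rank-3 factorization of an affine measurement matrix W (2F x P)
    into stacked camera matrices M (2F x 3) and shape S (3 x P),
    valid up to an affine ambiguity (Tomasi-Kanade style)."""
    Wc = W - W.mean(axis=1, keepdims=True)    # remove per-view translation
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * s[:3]                      # camera factor
    S = Vt[:3]                                # shape factor
    return M, S, Wc

# Synthetic data: 5 affine views (random 2x3 projections + translations)
# of 20 random 3D points
rng = np.random.default_rng(3)
X = rng.normal(size=(3, 20))
W = np.vstack([rng.normal(size=(2, 3)) @ X + rng.normal(size=(2, 1))
               for _ in range(5)])            # 10 x 20 measurement matrix
M, S, Wc = affine_factorization(W)
```

With noiseless observations the rank-3 product reproduces the centred measurements exactly; with real SEM tracks the residual of this factorization is itself a diagnostic of how well the affine sub-model fits.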

13 pages, 2809 KiB  
Article
3D Transparent Object Detection and Reconstruction Based on Passive Mode Single-Pixel Imaging
by Anumol Mathai, Ningqun Guo, Dong Liu and Xin Wang
Sensors 2020, 20(15), 4211; https://doi.org/10.3390/s20154211 - 29 Jul 2020
Cited by 8 | Viewed by 3370
Abstract
Transparent object detection and reconstruction are significant due to their practical applications. The appearance and characteristics of light in these objects make reconstruction methods tailored for Lambertian surfaces fail disgracefully. In this paper, we introduce a fixed multi-viewpoint approach to ascertain the shape of transparent objects, thereby avoiding rotation or movement of the object during imaging. In addition, a simple and cost-effective experimental setup is presented, which employs two single-pixel detectors and a digital micromirror device to image transparent objects by projecting binary patterns. In the system setup, a dark framework is placed around the object to create shades at its boundaries. By triangulating the light path from the object, the surface shape is recovered without considering reflections or the number of refractions. The method can therefore handle transparent objects of relatively complex shape with an unknown refractive index. The implementation of compressive sensing in this technique further simplifies the acquisition process by reducing the number of measurements. The experimental results show that the 2D images obtained from the single-pixel detectors are of good quality at a resolution of 32×32. Additionally, the obtained disparity and error maps indicate the feasibility and accuracy of the proposed method. This work provides new insight into 3D transparent object detection and reconstruction based on single-pixel imaging, at an affordable cost and with only a few detectors.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)
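Single-pixel imaging can be sketched in its simplest noiseless form: project orthogonal patterns, record one detector value per pattern, and invert by correlation. The paper uses binary patterns with compressive sensing to get by with fewer measurements; this full-basis Hadamard sketch only shows the underlying measurement model:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two).
    Rows are mutually orthogonal +/-1 patterns: H @ H.T = n * I."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Single-pixel model: each pattern row is displayed on the micromirror
# device, and the detector integrates the light into ONE number per pattern.
n = 64                              # an 8x8 scene, flattened
img = np.zeros(n)
img[18:22] = 1.0                    # a simple synthetic "object"
H = hadamard(n)
y = H @ img                         # one detector reading per pattern
recon = (H.T @ y) / n               # exact inverse in the noiseless case
```

Compressive sensing replaces this full inverse with a sparse recovery from a subset of the rows of H, which is how the acquisition count is reduced in practice.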
