3D and Multimodal Image Acquisition Methods

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 7373

Special Issue Editor


Prof. Dr. Gunther Notni
Guest Editor
Group for Quality Assurance and Image Processing, Technical University Ilmenau, 98684 Ilmenau, Germany
Interests: 3D sensors; structured light; robot vision; multimodal/multispectral imaging; human–machine interaction; high-resolution surface and shape measuring methods; machine learning

Special Issue Information

Dear Colleagues,

3D and multimodal imaging comprises the simultaneous acquisition of a scene with a 3D sensor system and with cameras operating in different spectral ranges, yielding a variety of image modalities. Multimodal imaging refers to the simultaneous production of signals for more than one imaging technique. As a result, the object is described by its spatial 3D coordinates (point clouds), its temporal behavior, and additional image modalities (for example, thermal, multispectral, and polarization images). This type of imaging is gaining importance in a wide range of applications, including medicine (e.g., cancer detection, surgical robotics, and contactless heart rate monitoring for medical diagnostics), biomedical applications, precision agriculture (e.g., recognition of fruits and their automated harvesting), autonomous systems (fast object recognition), forestry, robotics, optical sorting, and the food industry, to name but a few.
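To make the idea of such a combined scene description more concrete, the following minimal sketch (purely illustrative, not taken from any contribution to this Special Issue) shows one way per-point multimodal data could be held in memory, assuming NumPy and hypothetical field names:

```python
# Illustrative sketch only: field names and shapes are assumptions,
# not a published data format.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class MultimodalPointCloud:
    xyz: np.ndarray                             # (N, 3) spatial coordinates in metres
    timestamp: float                            # acquisition time, for temporal behavior
    thermal: Optional[np.ndarray] = None        # (N,) temperature per point, in degrees C
    multispectral: Optional[np.ndarray] = None  # (N, B) reflectance per spectral band
    polarization: Optional[np.ndarray] = None   # (N,) e.g. degree of linear polarization

    def num_points(self) -> int:
        return self.xyz.shape[0]

# Example: 1000 points carrying a thermal value and 8 spectral bands each.
cloud = MultimodalPointCloud(
    xyz=np.random.rand(1000, 3),
    timestamp=0.0,
    thermal=np.full(1000, 36.5),
    multispectral=np.random.rand(1000, 8),
)
```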

This trend is supported by the dynamic development of 3D sensors and of cameras in different spectral ranges. In addition to camera systems in the visible and near-infrared range, this includes in particular short-wave infrared (SWIR), thermal (far-infrared, FIR), multispectral, and polarization cameras.

The rapid increase in the number of application areas requires the development of real-time 3D and multimodal image acquisition techniques, which enable direct process feedback or the control of autonomous systems. Besides the actual sensor development, this includes, for example, multi-camera arrangements, multi-aperture systems, and new methods of system calibration and data evaluation that allow a pixel-accurate superimposition of the image information. Furthermore, the evaluation of multimodal image data streams (e.g., by means of convolutional neural networks, CNNs) and the derivation of novel segmentation methods for adapted image data reduction play an important role.
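As a small illustration of what a pixel-accurate superimposition involves, the following hedged sketch projects calibrated 3D points into a second camera (here a hypothetical thermal camera) with an ideal pinhole model; the intrinsic matrix K and the extrinsics (R, t) are assumed to come from a prior system calibration, and lens distortion is neglected:

```python
# Hedged sketch of pixel-accurate superimposition: project calibrated 3D points
# into a second modality's camera and sample its image per point. K, R and t
# are assumed to come from a prior calibration; distortion is ignored.
import numpy as np

def project_points(xyz_world, R, t, K):
    """Map (N, 3) world points to (N, 2) pixel coordinates of the second camera."""
    xyz_cam = xyz_world @ R.T + t           # world -> camera coordinates
    uv_hom = xyz_cam @ K.T                  # apply pinhole intrinsics
    return uv_hom[:, :2] / uv_hom[:, 2:3]   # perspective division

def sample_modality(image, uv):
    """Nearest-neighbour lookup of a modality value (e.g. temperature) per point."""
    h, w = image.shape[:2]
    px = np.clip(np.rint(uv).astype(int), 0, [w - 1, h - 1])
    return image[px[:, 1], px[:, 0]]
```

In a real system, the same lookup would be repeated for every additional modality so that each 3D point accumulates a vector of co-registered measurements.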

We look forward to contributions presenting technical, methodological, and algorithmic approaches that may contribute to the future development of 3D and multimodal imaging techniques. Contributions are not limited to specific application areas.

Prof. Dr. Gunther Notni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time 3D sensors
  • multimodal imaging systems
  • multispectral cameras
  • polarization cameras
  • multi-aperture cameras
  • multimodal imaging systems for medicine, biomedical applications, human–machine interaction, agriculture, forestry, production, robotics, and more
  • calibration techniques for multimodal imaging systems
  • data analysis in multimodal imaging
  • deep learning/CNNs in multimodal imaging

Published Papers (2 papers)


Research

15 pages, 10501 KiB  
Article
Enhanced Contactless Vital Sign Estimation from Real-Time Multimodal 3D Image Data
by Chen Zhang, Ingo Gebhart, Peter Kühmstedt, Maik Rosenberger and Gunther Notni
J. Imaging 2020, 6(11), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6110123 - 12 Nov 2020
Cited by 15 | Viewed by 2473
Abstract
The contactless estimation of vital signs using conventional color cameras and ambient light can be affected by motion artifacts and changes in ambient light. To address both of these problems, a multimodal 3D imaging system with irritation-free controlled illumination was developed in this work. In this system, real-time 3D imaging was combined with multispectral and thermal imaging. Based on the 3D image data, an efficient method was developed for the compensation of head motions, and novel approaches based on 3D regions of interest were proposed for the estimation of various vital signs from the multispectral and thermal video data. The developed imaging system and algorithms were demonstrated with test subjects, delivering a proof of concept.
(This article belongs to the Special Issue 3D and Multimodal Image Acquisition Methods)
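The paper's own algorithms are not reproduced here; as a generic illustration of the final step in camera-based pulse estimation, the sketch below takes a mean skin-region intensity trace (for example, averaged over a region of interest tracked in 3D) and reads the heart rate off the dominant spectral peak in the 0.7–3 Hz band (42–180 bpm). SciPy is assumed to be available, and the function name is hypothetical:

```python
# Generic illustration, not the authors' method: estimate pulse rate from a 1D
# skin-region intensity trace sampled at fs Hz (e.g. from multispectral video).
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(trace, fs):
    """Return the dominant pulse frequency in the 0.7-3 Hz band, in beats per minute."""
    b, a = butter(3, [0.7, 3.0], btype="band", fs=fs)   # band-pass around plausible pulse rates
    filtered = filtfilt(b, a, trace - np.mean(trace))   # zero-phase filtering
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Example with a synthetic 72 bpm (1.2 Hz) signal sampled at 30 Hz:
t = np.arange(0, 30, 1 / 30.0)
print(heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size), fs=30.0))
```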

14 pages, 6189 KiB  
Article
body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices
by Magda Alexandra Trujillo-Jiménez, Pablo Navarro, Bruno Pazos, Leonardo Morales, Virginia Ramallo, Carolina Paschetta, Soledad De Azevedo, Anahí Ruderman, Orlando Pérez, Claudio Delrieux and Rolando Gonzalez-José
J. Imaging 2020, 6(9), 94; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6090094 - 11 Sep 2020
Cited by 11 | Viewed by 4291
Abstract
Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstructions or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken with handheld devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step, which avoids the generation of spurious points that is usual in photogrammetric reconstruction. A group of 60 persons was recorded with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. Finally, we used as a gold standard anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the resulting point clouds, as compared with the LiDAR-based meshes, and of the anthropometric measurements, as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to that of the LiDAR reconstruction.
(This article belongs to the Special Issue 3D and Multimodal Image Acquisition Methods)
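As a hedged illustration of the background-removal idea described in the abstract (and not the body2vec code itself), the sketch below masks every video frame with a person segmentation before the frames are handed to a photogrammetry pipeline; the segment_person function is a hypothetical stand-in for the authors' trained network:

```python
# Hedged sketch of the background-removal idea, not the body2vec implementation.
# `segment_person` is a hypothetical stand-in for the trained segmentation network.
import numpy as np
import cv2  # OpenCV, assumed available

def mask_frames(video_path, segment_person, out_dir):
    """Write background-suppressed frames that a photogrammetry tool can ingest."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_person(frame)                   # (H, W) boolean person mask
        masked = np.where(mask[..., None], frame, 0)   # zero out background pixels
        cv2.imwrite(f"{out_dir}/frame_{idx:05d}.png", masked)
        idx += 1
    cap.release()
```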
