J. Imaging, Volume 4, Issue 10 (October 2018) – 14 articles

Cover Story: This paper introduces ECRU, an encoder-decoder-based CNN for road-scene understanding. The proposed model offers a simplified CNN architecture with less overhead and higher performance. By re-using the encoder's pooling indices, it needs fewer computation parameters and achieves shorter inference times. The network is well suited to scene-understanding applications and could be employed in driving assistance to enhance vehicle and, more generally, road safety. It is trained and tested on the well-known CamVid road-scenes dataset and delivers outstanding results in comparison with similar previously published methods.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
15 pages, 8256 KiB  
Review
An Overview of Watershed Algorithm Implementations in Open Source Libraries
by Anton S. Kornilov and Ilia V. Safonov
J. Imaging 2018, 4(10), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100123 - 20 Oct 2018
Cited by 133 | Viewed by 13004
Abstract
Watershed is a widespread technique for image segmentation. Many researchers apply the method as implemented in open-source libraries without a deep understanding of its characteristics and limitations. In this review, we describe benchmarking outcomes for six open-source marker-controlled watershed implementations for the segmentation of 2D and 3D images. Even though the considered solutions are based on the same flooding algorithm with O(n) computational complexity, the implementations differ significantly in performance. In addition, building watershed lines increases processing time, and high memory consumption is a further bottleneck when dealing with huge volumetric images. Sometimes, switching to better-optimized software can mitigate long processing times and insufficient memory. We expect parallel processing to overcome the current limitations; however, the development of concurrent approaches to watershed segmentation remains a challenging problem. Full article
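The marker-controlled flooding strategy shared by the benchmarked implementations can be sketched in a few lines of pure Python. This is an illustrative priority-queue flood, not code from any of the reviewed libraries:

```python
import heapq

def watershed(image, markers):
    """Marker-controlled watershed by priority flooding.

    image   -- 2D list of numbers (e.g., a gradient magnitude map)
    markers -- 2D list with positive integer labels at seed pixels, 0 elsewhere
    Returns a 2D list of labels covering the whole image.
    """
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]
    heap = []
    # Seed the queue with every marker pixel at its own intensity.
    for y in range(h):
        for x in range(w):
            if labels[y][x] > 0:
                heapq.heappush(heap, (image[y][x], y, x))
    # Always flood from the lowest intensity first, so each basin grows
    # uphill until it meets a neighbouring basin.
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels
```

Each pixel enters the queue once, which is where the O(n) complexity mentioned above comes from; real implementations differ mainly in queue data structures, connectivity handling, and whether watershed lines are built.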
17 pages, 1830 KiB  
Article
Accelerating SuperBE with Hardware/Software Co-Design
by Andrew Tzer-Yeu Chen, Rohaan Gupta, Anton Borzenko, Kevin I-Kai Wang and Morteza Biglari-Abhari
J. Imaging 2018, 4(10), 122; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100122 - 18 Oct 2018
Cited by 5 | Viewed by 5400
Abstract
Background Estimation is a common computer vision task, used for segmenting moving objects in video streams. This can be useful as a pre-processing step, isolating regions of interest for more complicated algorithms performing detection, recognition, and identification tasks, in order to reduce overall computation time. This is especially important in the context of embedded systems like smart cameras, which may need to process images with constrained computational resources. This work focuses on accelerating SuperBE, a superpixel-based background estimation algorithm that was designed for simplicity and reducing computational complexity while maintaining state-of-the-art levels of accuracy. We explore both software and hardware acceleration opportunities, converting the original algorithm into a greyscale, integer-only version, and using Hardware/Software Co-design to develop hardware acceleration components on FPGA fabric that assist a software processor. We achieved a 4.4× speed improvement with the software optimisations alone, and a 2× speed improvement with the hardware optimisations alone. When combined, these led to a 9× speed improvement on a Cyclone V System-on-Chip, delivering almost 38 fps on 320 × 240 resolution images. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs)
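SuperBE itself works on superpixels; as a much simpler illustration of the general background-estimation idea, a running per-pixel average with a hypothetical difference threshold might look as follows (an assumption-laden sketch, not the SuperBE algorithm):

```python
# Generic running-average background estimation: each new greyscale frame
# updates a per-pixel background model, and pixels far from the model are
# flagged as foreground. The alpha and threshold values are arbitrary.
def update_background(background, frame, alpha=0.1, threshold=20):
    """Return (new_background, foreground_mask) for one greyscale frame."""
    new_bg, mask = [], []
    for bg_row, fr_row in zip(background, frame):
        # Exponential moving average toward the new frame.
        new_bg.append([(1 - alpha) * b + alpha * f
                       for b, f in zip(bg_row, fr_row)])
        # Foreground where the frame deviates strongly from the model.
        mask.append([abs(f - b) > threshold
                     for b, f in zip(bg_row, fr_row)])
    return new_bg, mask
```

Hardware/software co-design, as in the paper, would offload the inner per-pixel loops to FPGA fabric while a software processor handles control flow.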
12 pages, 1855 KiB  
Article
Signed Real-Time Delay Multiply and Sum Beamforming for Multispectral Photoacoustic Imaging
by Thomas Kirchner, Franz Sattler, Janek Gröhl and Lena Maier-Hein
J. Imaging 2018, 4(10), 121; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100121 - 17 Oct 2018
Cited by 33 | Viewed by 5389
Abstract
Reconstruction of photoacoustic (PA) images acquired with clinical ultrasound transducers is usually performed using the Delay and Sum (DAS) beamforming algorithm. Recently, a variant of DAS, referred to as Delay Multiply and Sum (DMAS) beamforming has been shown to provide increased contrast, signal-to-noise ratio (SNR) and resolution in PA imaging. The main reasons for the use of DAS beamforming in photoacoustics are its simple implementation, real-time capability, and the linearity of the beamformed image to the PA signal. This is crucial for the identification of different chromophores in multispectral PA applications. In contrast, current DMAS implementations are not responsive to the full spectrum of sound frequencies from a photoacoustic source and have not been shown to provide a reconstruction linear to the PA signal. Furthermore, due to its increased computational complexity, DMAS has not been shown yet to work in real-time. Here, we present an open-source real-time variant of the DMAS algorithm, signed DMAS (sDMAS), that ensures linearity in the original PA signal response while providing the increased image quality of DMAS. We show the applicability of sDMAS for multispectral PA applications, in vitro and in vivo. The sDMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available as real-time capable implementations. Full article
(This article belongs to the Special Issue Biomedical Photoacoustic Imaging: Technologies and Methods)
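The pairwise multiply-and-sum core of DMAS-style beamforming can be sketched for one output sample; the signed square root keeps the result in the dimensionality of the input signal. This is an illustrative reduction over already-delayed channel samples, not the paper's sDMAS implementation:

```python
import math

def dmas_sample(delayed):
    """Combine already-delayed samples from all transducer channels.

    For every channel pair (i, j), multiply the two samples and take the
    square root of the magnitude, restoring the sign of the product.
    """
    total = 0.0
    n = len(delayed)
    for i in range(n):
        for j in range(i + 1, n):
            p = delayed[i] * delayed[j]
            total += math.copysign(math.sqrt(abs(p)), p)
    return total
```

The O(n^2) pair loop is the computational burden the abstract refers to; DAS, by contrast, is a single O(n) sum over the delayed samples.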
20 pages, 1416 KiB  
Article
In the Eye of the Deceiver: Analyzing Eye Movements as a Cue to Deception
by Diana Borza, Razvan Itu and Radu Danescu
J. Imaging 2018, 4(10), 120; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100120 - 16 Oct 2018
Cited by 17 | Viewed by 7714
Abstract
Deceit occurs in daily life and, even from an early age, children can successfully deceive their parents. Therefore, numerous books and psychological studies have been published to help people decipher the facial cues to deceit. In this study, we tackle the problem of deceit detection by analyzing eye movements: blinks, saccades and gaze direction. Recent psychological studies have shown that the non-visual saccadic eye movement rate is higher when people lie. We propose a fast and accurate framework for eye tracking and eye movement recognition and analysis. The proposed system tracks the position of the iris, as well as the eye corners (the outer shape of the eye). Next, in an offline analysis stage, the trajectory of these eye features is analyzed in order to recognize and measure various cues which can be used as indicators of deception: the blink rate, the gaze direction and the saccadic eye movement rate. On the task of iris center localization, the method achieves within-pupil localization in 91.47% of the cases. For blink localization, we obtained an accuracy of 99.3% on the difficult EyeBlink8 dataset. In addition, we propose a novel metric, the normalized blink rate deviation, to spot deceitful behavior based on blink rate. Using this metric and a simple decision stump, the deceitful answers from the Silesian Face database were recognized with an accuracy of 96.15%. Full article
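The blink-rate cue and the decision stump can be sketched as below; the normalization and threshold shown here are illustrative assumptions, not the paper's exact definitions:

```python
def normalized_deviation(rate, baseline_rate):
    """Deviation of the observed blink rate from a subject's baseline,
    expressed relative to that baseline (a hypothetical normalization)."""
    return (rate - baseline_rate) / baseline_rate

def stump_predict(value, threshold):
    """Decision stump: a one-level classifier that thresholds one feature."""
    return "deceitful" if value > threshold else "truthful"
```

A decision stump is deliberately minimal, which supports the paper's point that a single well-chosen statistic can separate the two classes.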
13 pages, 1192 KiB  
Article
Objective Classes for Micro-Facial Expression Recognition
by Adrian K. Davison, Walied Merghani and Moi Hoon Yap
J. Imaging 2018, 4(10), 119; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100119 - 15 Oct 2018
Cited by 73 | Viewed by 7632
Abstract
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition. Full article
25 pages, 4646 KiB  
Article
Fusing Multiple Multiband Images
by Reza Arablouei
J. Imaging 2018, 4(10), 118; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100118 - 12 Oct 2018
Cited by 9 | Viewed by 4355
Abstract
High-resolution hyperspectral images are in great demand but hard to acquire due to several existing fundamental and technical limitations. A practical way around this is to fuse multiple multiband images of the same scene with complementary spatial and spectral resolutions. We propose an algorithm for fusing an arbitrary number of coregistered multiband, i.e., panchromatic, multispectral, or hyperspectral, images through estimating the endmembers and their abundances in the fused image. To this end, we use the forward observation and linear mixture models and formulate an appropriate maximum-likelihood estimation problem. Then, we regularize the problem via a vector total-variation penalty and the non-negativity/sum-to-one constraints on the endmember abundances and solve it using the alternating direction method of multipliers. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, as well as coping with the potential ill-posedness of the fusion problem. Experiments with multiband images constructed from real-world hyperspectral images reveal the superior performance of the proposed algorithm in comparison with the state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
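The forward linear mixture model with non-negative, sum-to-one abundances can be illustrated directly; the paper's contribution is the regularized ADMM-based estimation, which this toy forward model does not reproduce:

```python
def mix(endmembers, abundances):
    """Synthesize one pixel spectrum under the linear mixture model.

    endmembers -- list of endmember spectra (each a list of band values)
    abundances -- one non-negative weight per endmember, summing to one
    """
    assert abs(sum(abundances) - 1.0) < 1e-9
    assert all(a >= 0 for a in abundances)
    bands = len(endmembers[0])
    # Each band is the abundance-weighted sum of the endmember values.
    return [sum(a * e[b] for a, e in zip(abundances, endmembers))
            for b in range(bands)]
```

Fusion then amounts to inverting this model jointly across all input images, with total variation encouraging piecewise smooth abundance maps.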
18 pages, 2448 KiB  
Article
Multivariate Statistical Approach to Image Quality Tasks
by Praful Gupta, Christos G. Bampis, Jack L. Glover, Nicholas G. Paulter and Alan C. Bovik
J. Imaging 2018, 4(10), 117; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100117 - 12 Oct 2018
Cited by 3 | Viewed by 3870
Abstract
Many existing natural scene statistics-based no reference image quality assessment (NR IQA) algorithms employ univariate parametric distributions to capture the statistical inconsistencies of bandpass distorted image coefficients. Here, we propose a multivariate model of natural image coefficients expressed in the bandpass spatial domain that has the potential to capture higher order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information. We also demonstrate the violation of Gaussianity assumptions that occur when locally estimating the energies of distorted image coefficients. Thus, we propose a generalized Gaussian-based local contrast estimator as a way to implement non-linear local gain control, which facilitates the accurate modeling of both pristine and distorted images. We integrate the novel approach of generalized contrast normalization with multivariate modeling of bandpass image coefficients into a holistic NR IQA model, which we refer to as multivariate generalized contrast normalization (MVGCN). We demonstrate the improved performance of MVGCN on quality-relevant tasks on multiple imaging modalities, including visible light image quality prediction and task success prediction on distorted X-ray images. Full article
(This article belongs to the Special Issue Image Quality)
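The classic divisive contrast normalization that the paper generalizes can be sketched for one local window; the constant C is a stabilizer and the value used here is arbitrary:

```python
import math

def normalize_patch(patch, C=1.0):
    """Contrast-normalize a flat list of pixel values from one local window:
    subtract the local mean and divide by the local standard deviation plus
    a stabilizing constant C (which prevents division by zero in flat areas).
    """
    n = len(patch)
    mu = sum(patch) / n
    sigma = math.sqrt(sum((p - mu) ** 2 for p in patch) / n)
    return [(p - mu) / (sigma + C) for p in patch]
```

The abstract's point is that this local energy estimate is itself non-Gaussian under distortion, motivating a generalized Gaussian-based contrast estimator in place of the plain standard deviation.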
19 pages, 2763 KiB  
Article
ECRU: An Encoder-Decoder Based Convolution Neural Network (CNN) for Road-Scene Understanding
by Robail Yasrab
J. Imaging 2018, 4(10), 116; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100116 - 08 Oct 2018
Cited by 17 | Viewed by 9715
Abstract
This research presents the idea of a novel fully-Convolutional Neural Network (CNN)-based model for probabilistic pixel-wise segmentation, titled Encoder-decoder-based CNN for Road-Scene Understanding (ECRU). Lately, scene understanding has become an evolving research area, and semantic segmentation is the most recent method for visual recognition. Among vision-based smart systems, the driving assistance system turns out to be a much preferred research topic. The proposed model is an encoder-decoder that performs pixel-wise class predictions. The encoder network is composed of a VGG-19 layer model, while the decoder network uses 16 upsampling and deconvolution units. The encoder of the network has a very flexible architecture that can be altered and trained for any size and resolution of images. The decoder network upsamples and maps the low-resolution encoder’s features. Consequently, there is a substantial reduction in the trainable parameters, as the network recycles the encoder’s pooling indices for pixel-wise classification and segmentation. The proposed model is intended to offer a simplified CNN model with less overhead and higher performance. The network is trained and tested on the famous road scenes dataset CamVid and offers outstanding outcomes in comparison to similar early approaches like FCN and VGG16 in terms of performance vs. trainable parameters. Full article
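The pooling-index reuse that reduces ECRU's trainable parameters can be illustrated in 1D: max-pooling records where each maximum came from, and the decoder unpools by writing values back to exactly those positions instead of learning an upsampling (a minimal sketch, not the ECRU code):

```python
def max_pool_with_indices(xs, size=2):
    """Max-pool a 1D signal and remember the argmax position per window."""
    pooled, indices = [], []
    for start in range(0, len(xs), size):
        window = xs[start:start + size]
        k = max(range(len(window)), key=lambda i: window[i])
        pooled.append(window[k])
        indices.append(start + k)
    return pooled, indices

def unpool(pooled, indices, length):
    """Parameter-free upsampling: scatter values to the recorded positions,
    leaving all other positions zero."""
    out = [0.0] * length
    for v, i in zip(pooled, indices):
        out[i] = v
    return out
```

Because `unpool` has no weights of its own, the decoder only needs convolutions to densify the sparse maps, which is the source of the parameter savings mentioned in the abstract.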
11 pages, 2700 KiB  
Article
A Non-Structural Representation Scheme for Articulated Shapes
by Asli Genctav and Sibel Tari
J. Imaging 2018, 4(10), 115; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100115 - 08 Oct 2018
Viewed by 3057
Abstract
Articulated shapes are successfully represented by structural representations which are organized in the form of graphs of shape components. We present an alternative representation scheme which is equally powerful but does not require explicit modeling or discovery of structural relations. The key element in our scheme is a novel multi-scale pixel-based distinctness measure which implicitly quantifies how rare a particular pixel is in terms of its geometry with respect to all pixels of the shape. The spatial distribution of the distinctness yields a partitioning of the shape into a set of regions. The proposed representation is a collection of size-normalized probability distributions of the distinctness over regions at shape-dependent scales. We test the proposed representation on a clustering task. Full article
41 pages, 21219 KiB  
Article
On the Application of LBP Texture Descriptors and Their Variants for No-Reference Image Quality Assessment
by Pedro Garcia Freitas, Luísa Peixoto Da Eira, Samuel Soares Santos and Mylene Christine Queiroz de Farias
J. Imaging 2018, 4(10), 114; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100114 - 04 Oct 2018
Cited by 16 | Viewed by 7038
Abstract
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to estimate quality automatically. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity. Full article
(This article belongs to the Special Issue Image Quality)
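The basic LBP operator underlying all the reviewed descriptors can be sketched as follows (radius-1, 8-neighbour variant; the bit ordering used here is one common convention, not necessarily the paper's):

```python
def lbp_code(image, y, x):
    """8-neighbour LBP code of pixel (y, x) of a 2D list.

    Each neighbour is thresholded against the centre pixel; the eight
    comparison results are read off as one bit each of an 8-bit code.
    """
    c = image[y][x]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbours):
        if image[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

Quality-aware features are then histograms of these codes over the whole image; distortions such as blur or blocking shift the histogram's shape, which is the premise the abstract tests.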
4 pages, 177 KiB  
Editorial
Phase-Contrast and Dark-Field Imaging
by Simon Zabler
J. Imaging 2018, 4(10), 113; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100113 - 02 Oct 2018
Cited by 2 | Viewed by 3619
Abstract
Very early, in 1896, Wilhelm Conrad Röntgen, the founding father of X-rays, attempted in vain to measure diffraction and refraction of this new kind of radiation. Only 70 years later, these effects were measured by Ulrich Bonse and Michael Hart, who used them to make full-field images of biological specimens, coining the term phase-contrast imaging. Yet another 30 years passed until the Talbot effect was rediscovered for X-radiation, giving rise to a micrograting-based interferometer that replaced the Bonse–Hart interferometer, which relied on a set of four Laue crystals for beam splitting and interference. By merging the Lau interferometer with this Talbot interferometer another ten years later, measuring X-ray refraction and X-ray scattering full-field and in cm-sized objects (as Röntgen had attempted 110 years earlier) became feasible in every X-ray laboratory around the world. Today, now that another twelve years have passed and we are approaching the 125th jubilee of Röntgen's discovery, neither Laue crystals nor microgratings are a necessity for sensing refraction and scattering by X-rays. Cardboard, steel wool, and sandpaper are sufficient for extracting these contrasts from transmission images, using the latest image reconstruction algorithms. This advancement and the ever-rising number of applications for phase-contrast and dark-field imaging prove to what degree our understanding of imaging physics as well as signal processing has advanced since the advent of X-ray physics, in particular during the past two decades. The discovery of the electron, as well as the development of electron imaging technology, has accompanied X-ray physics closely along its path, with both modalities now exploring the applications of new dark-field contrast mechanisms. Materials science, life science, archeology, non-destructive testing, and medicine are the key faculties which have already integrated these new imaging devices, using their contrast mechanisms in full.
This Special Issue, "Phase-Contrast and Dark-Field Imaging", gives us a broad yet very much to-the-point glimpse of the research and development currently taking place in this very active field. We find reviews, application reports, and methodological papers of very high quality from various groups, most of which operate X-ray scanners that comprise these new imaging modalities. Full article
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
17 pages, 374 KiB  
Article
Unsupervised Local Binary Pattern Histogram Selection Scores for Color Texture Classification
by Mariam Kalakech, Alice Porebski, Nicolas Vandenbroucke and Denis Hamad
J. Imaging 2018, 4(10), 112; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100112 - 28 Sep 2018
Cited by 7 | Viewed by 3962
Abstract
In recent years, several supervised scores have been proposed in the literature to select histograms. Applied to color texture classification problems, these scores have improved accuracy by selecting the most discriminant histograms among a set of available ones computed from a color image. In this paper, two new scores are proposed to select histograms: the adapted Variance score and the adapted Laplacian score. These new scores are computed without considering the class labels of the images, contrary to what has been done until now. Experiments on the OuTex, USPTex, and BarkTex sets show that these unsupervised scores give results as good as the supervised ones for LBP histogram selection. Full article
(This article belongs to the Special Issue Computational Colour Imaging)
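A generic unsupervised variance-style score for ranking candidate histograms might look as follows; the paper's adapted Variance and Laplacian scores are more elaborate than this sketch, which only conveys the label-free principle:

```python
def variance_score(histograms):
    """Score one candidate histogram type from its values on a training set.

    histograms -- one histogram (list of bin values) per training image.
    Histogram types whose bins vary most across images are assumed to be
    the most discriminant; no class labels are used anywhere.
    """
    n = len(histograms)
    bins = len(histograms[0])
    score = 0.0
    for b in range(bins):
        column = [h[b] for h in histograms]
        mean = sum(column) / n
        score += sum((v - mean) ** 2 for v in column) / n
    return score
```

Selection then keeps the top-scoring histogram types, mirroring how the supervised scores are used but without needing labeled images.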
24 pages, 527 KiB  
Article
GPU Acceleration of the Most Apparent Distortion Image Quality Assessment Algorithm
by Joshua Holloway, Vignesh Kannan, Yi Zhang, Damon M. Chandler and Sohum Sohoni
J. Imaging 2018, 4(10), 111; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100111 - 25 Sep 2018
Cited by 2 | Viewed by 4170
Abstract
The primary function of multimedia systems is to seamlessly transform and display content to users while maintaining the perception of acceptable quality. For images and videos, perceptual quality assessment algorithms play an important role in determining what is acceptable quality and what is unacceptable from a human visual perspective. As modern image quality assessment (IQA) algorithms gain widespread adoption, it is important to achieve a balance between their computational efficiency and their quality prediction accuracy. One way to improve computational performance to meet real-time constraints is to use simplistic models of visual perception, but such an approach has a serious drawback in terms of poor-quality predictions and limited robustness to changing distortions and viewing conditions. In this paper, we investigate the advantages and potential bottlenecks of implementing a best-in-class IQA algorithm, Most Apparent Distortion, on graphics processing units (GPUs). Our results suggest that an understanding of the GPU and CPU architectures, combined with detailed knowledge of the IQA algorithm, can lead to non-trivial speedups without compromising prediction accuracy. A single-GPU and a multi-GPU implementation showed a 24× and a 33× speedup, respectively, over the baseline CPU implementation. A bottleneck analysis revealed the kernels with the highest runtimes, and a microarchitectural analysis illustrated the underlying reasons for the high runtimes of these kernels. Programs written with optimizations such as blocking that map well to CPU memory hierarchies do not map well to the GPU’s memory hierarchy. While compute unified device architecture (CUDA) is convenient to use and is powerful in facilitating general purpose GPU (GPGPU) programming, knowledge of how a program interacts with the underlying hardware is essential for understanding performance bottlenecks and resolving them. Full article
(This article belongs to the Special Issue Image Quality)
12 pages, 3041 KiB  
Article
Hyperspectral Imaging Using Laser Excitation for Fast Raman and Fluorescence Hyperspectral Imaging for Sorting and Quality Control Applications
by Florian Gruber, Philipp Wollmann, Wulf Grählert and Stefan Kaskel
J. Imaging 2018, 4(10), 110; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4100110 - 21 Sep 2018
Cited by 9 | Viewed by 7070
Abstract
A hyperspectral measurement system for the fast and large area measurement of Raman and fluorescence signals was developed, characterized and tested. This laser hyperspectral imaging system (Laser-HSI) can be used for sorting tasks and for continuous quality monitoring. The system uses a 532 nm Nd:YAG laser and a standard pushbroom HSI camera. Depending on the lens selected, it is possible to cover large areas (e.g., field of view (FOV) = 386 mm) or to achieve high spatial resolutions (e.g., 0.02 mm). The developed Laser-HSI was used for four exemplary experiments: (a) the measurement and classification of a mixture of sulphur and naphthalene; (b) the measurement of carotenoid distribution in a carrot slice; (c) the classification of black polymer particles; and, (d) the localization of impurities on a lead zirconate titanate (PZT) piezoelectric actuator. It could be shown that the measurement data obtained were in good agreement with reference measurements taken with a high-resolution Raman microscope. Furthermore, the suitability of the measurements for classification using machine learning algorithms was also demonstrated. The developed Laser-HSI could be used in the future for complex quality control or sorting tasks where conventional HSI systems fail. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)