J. Imaging, Volume 6, Issue 4 (April 2020) – 12 articles

Cover Story: An assessment of different imaging artifacts in quantitative neutron transmission imaging of hydrogenous steel specimens is performed. The contribution of scattered neutrons to the total detected intensity is dependent on the sample–detector distance and the scattering behavior of the sample material. It can be reduced by either using the full detector area for imaging or by shielding the unused detector areas from neutrons with nontransparent material. This also reduces the backlight effect in the scintillator. Varying sample–detector distances allow the transmitted intensity to be separated from the scattered intensity by fitting an appropriate distance law. Refraction of neutrons on smooth surfaces can be avoided by exact alignment of the sample parallel to the incoming beam.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
28 pages, 2135 KiB  
Article
Analysing Arbitrary Curves from the Line Hough Transform
by Donald Bailey, Yuan Chang and Steven Le Moan
J. Imaging 2020, 6(4), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040026 - 23 Apr 2020
Cited by 6 | Viewed by 4106
Abstract
The Hough transform is commonly used for detecting linear features within an image. A line is mapped to a peak within parameter space corresponding to the parameters of the line. By analysing the shape of the peak, or peak locus, within parameter space, it is possible to also use the line Hough transform to detect or analyse arbitrary (non-parametric) curves. It is shown that there is a one-to-one relationship between the curve in image space, and the peak locus in parameter space, enabling the complete curve to be reconstructed from its peak locus. In this paper, we determine the patterns of the peak locus for closed curves (including circles and ellipses), linear segments, inflection points, and corners. It is demonstrated that the curve shape can be simplified by ignoring parts of the peak locus. One such simplification is to derive the convex hull of shapes directly from the representation within the Hough transform. It is also demonstrated that the parameters of elliptical blobs can be measured directly from the Hough transform. Full article
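The peak locus analysed in the paper lives in the standard (ρ, θ) line Hough parameter space. As a point of reference only, the sketch below builds that accumulator for a set of edge points; it is an illustrative NumPy implementation, not the authors' code.

```python
# Illustrative sketch (not the authors' code): a minimal (rho, theta) line Hough
# accumulator, i.e., the parameter space in which the paper's peak loci live.
import numpy as np

def hough_line_accumulator(edge_points, img_shape, n_theta=180):
    """Vote each edge point (x, y) into a (rho, theta) accumulator."""
    h, w = img_shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = int(np.ceil(np.hypot(h, w)))
    accumulator = np.zeros((2 * rho_max + 1, n_theta), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        # each point votes along a sinusoid rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + rho_max
        accumulator[rhos, np.arange(n_theta)] += 1
    return accumulator, thetas, rho_max

# A straight line concentrates its votes in a single peak; a curve instead
# produces a locus of local peaks, which is the structure the paper analyses.
```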

31 pages, 25180 KiB  
Article
A Robust Tracking-by-Detection Algorithm Using Adaptive Accumulated Frame Differencing and Corner Features
by Nahlah Algethami and Sam Redfern
J. Imaging 2020, 6(4), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040025 - 21 Apr 2020
Cited by 6 | Viewed by 3935
Abstract
We propose a tracking-by-detection algorithm to track the movements of meeting participants from an overhead camera. An advantage of using overhead cameras is that all objects can typically be seen clearly, with little occlusion; however, detecting people from a wide-angle overhead view also poses challenges, such as people's appearance changing significantly with their position in the wide-angle image and a general lack of strong image features. Our experimental datasets do not include empty meeting rooms, which means that standard motion-based detection techniques (e.g., background subtraction or consecutive frame differencing) struggle, since there is no prior knowledge for a background model. Additionally, standard techniques may perform poorly when there is a wide range of movement behaviours (e.g., periods of no movement and periods of fast movement), as is often the case in meetings. Our algorithm uses a novel coarse-to-fine detection and tracking approach, combining motion detection using adaptive accumulated frame differencing (AAFD) with Shi-Tomasi corner detection. We present a quantitative and qualitative evaluation which demonstrates the robustness of our method in tracking people in environments where object features are not clear and have a similar colour to the background. We show that our approach achieves excellent performance in terms of the multiple object tracking accuracy (MOTA) metrics, and that it is particularly robust to initialisation differences when compared with baseline and state-of-the-art trackers. Using the Online Tracking Benchmark (OTB) videos, we also demonstrate that our tracker is very strong in the presence of background clutter, deformation and illumination variation. Full article
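The two ingredients named in the abstract, accumulated frame differencing and Shi-Tomasi corners, can be combined roughly as in the sketch below. This is a hedged illustration with assumed thresholds and parameters, not the published implementation.

```python
# Hedged sketch, not the paper's implementation: coarse motion regions from
# accumulated frame differencing, then fine Shi-Tomasi corners inside them.
# diff_thresh, max_corners and the OpenCV parameters are illustrative assumptions.
import cv2
import numpy as np

def coarse_to_fine_detect(frames, diff_thresh=25, max_corners=50):
    """frames: list of grayscale uint8 images from the overhead camera."""
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        acc += cv2.absdiff(curr, prev).astype(np.float32)
    acc /= max(len(frames) - 1, 1)                            # averaged accumulated difference
    motion_mask = (acc > diff_thresh).astype(np.uint8) * 255  # coarse motion regions
    corners = cv2.goodFeaturesToTrack(frames[-1], maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=motion_mask)       # fine Shi-Tomasi features
    return motion_mask, corners
```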

39 pages, 2370 KiB  
Article
Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks
by Michalis Giannopoulos, Anastasia Aidini, Anastasia Pentari, Konstantina Fotiadou and Panagiotis Tsakalides
J. Imaging 2020, 6(4), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040024 - 18 Apr 2020
Cited by 4 | Viewed by 3556
Abstract
Multispectral sensors constitute a core Earth observation image technology generating massive high-dimensional observations. To address the communication and storage constraints of remote sensing platforms, lossy data compression becomes necessary, but it unavoidably introduces unwanted artifacts. In this work, we consider the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and we propose a resource-efficient compression scheme based on quantized low-rank tensor completion. The proposed method is also applicable to the case of missing observations due to environmental conditions, such as cloud cover. To quantify the performance of compression, we consider both typical image quality metrics as well as the impact on state-of-the-art deep learning-based land-cover classification schemes. Experimental analysis on observations from the ESA Sentinel-2 satellite reveals that even minimal compression can have negative effects on classification performance which can be efficiently addressed by our proposed recovery scheme. Full article
(This article belongs to the Special Issue Multispectral Imaging)
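As a toy illustration of where lossy low-rank compression artifacts come from, the sketch below truncates the SVD of a multispectral cube's spectral unfolding. It is not the paper's quantized tensor-completion scheme, and the rank parameter is an assumption.

```python
# Toy illustration only (not the paper's quantized tensor-completion scheme):
# low-rank compression of a multispectral cube via a truncated SVD of its
# pixels-by-bands unfolding, to show where compression artifacts originate.
import numpy as np

def lowrank_compress(cube, rank):
    """cube: (H, W, B) multispectral image; keep `rank` spectral components."""
    h, w, b = cube.shape
    unfolded = cube.reshape(h * w, b)               # pixels x bands matrix
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]   # rank-truncated reconstruction
    return approx.reshape(h, w, b)

# Land-cover classification accuracy can then be compared on the original
# versus the compressed cube to quantify the impact of compression.
```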

18 pages, 3901 KiB  
Article
Neugebauer Models for Color Error Diffusion Halftoning
by Kaiming Wu, Kohei Inoue and Kenji Hara
J. Imaging 2020, 6(4), 23; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040023 - 17 Apr 2020
Cited by 2 | Viewed by 3359
Abstract
In this paper, we propose a method for halftoning color images based on an error diffusion technique, a color design criterion and Neugebauer models for expressing colors. For a natural extension of the conventional method for grayscale error diffusion to its color version, we first reformulate grayscale error diffusion with a one-dimensional Neugebauer model. Then we increase the dimension of the model to derive a color error diffusion method based on a three-dimensional Neugebauer model in RGB (red, green and blue) color space. Moreover, we propose a sparse Neugebauer model based on a color design criterion, namely the minimal brightness variation criterion (MBVC), from which we derive a sparse Neugebauer model-based error diffusion method. Experimental results show that color halftone images produced by the proposed methods preserve the color content of the original continuous-tone images better than those produced by conventional color error diffusion methods. We also demonstrate that the proposed sparse method reduces halftone noise better than the state-of-the-art method based on MBVC. Full article
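For readers unfamiliar with the baseline, the sketch below shows conventional grayscale error diffusion with Floyd-Steinberg weights, the starting point that the paper reformulates with a one-dimensional Neugebauer model; it is illustrative only and not the proposed color method.

```python
# Sketch of conventional grayscale error diffusion (Floyd-Steinberg weights),
# the baseline the paper extends to color via Neugebauer models.
import numpy as np

def floyd_steinberg(gray):
    """gray: 2D float array in [0, 1]; returns a binary halftone."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0          # binarise the current pixel
            out[y, x] = new
            err = old - new                           # diffuse the quantisation error
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out
```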

10 pages, 2339 KiB  
Article
On the Genesis of Artifacts in Neutron Transmission Imaging of Hydrogenous Steel Specimens
by Beate Pfretzschner, Thomas Schaupp, Andreas Hannemann, Michael Schulz and Axel Griesche
J. Imaging 2020, 6(4), 22; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040022 - 09 Apr 2020
Cited by 1 | Viewed by 3931
Abstract
Hydrogen-charged supermartensitic steel samples were used to systematically investigate imaging artifacts in neutron radiography. Cadmium stencils were placed around the samples to shield the scintillator from excessive neutron radiation and to investigate the influence of the backlight effect. The contribution of scattered neutrons to the total detected intensity was investigated by additionally varying the sample-detector distance and applying a functional correlation between distance and intensity. Furthermore, the influence of the surface roughness on the edge effect due to refraction was investigated. Full article
(This article belongs to the Special Issue Neutron Imaging)
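The separation of transmitted and scattered intensity by varying the sample-detector distance can be illustrated with a simple curve fit, as sketched below. The 1/d² term is an assumed point-scatterer distance law and the data are synthetic; the functional correlation used by the authors may differ.

```python
# Illustrative sketch only: fitting detected intensity against sample-detector
# distance d to separate a constant transmitted part from a distance-dependent
# scattered part. The 1/d**2 form and all numbers are assumptions, not the
# authors' model or data.
import numpy as np
from scipy.optimize import curve_fit

def intensity_model(d, i_transmitted, scatter_coeff):
    return i_transmitted + scatter_coeff / d**2

rng = np.random.default_rng(0)
distances = np.linspace(5.0, 80.0, 8)                     # mm, synthetic example
intensities = intensity_model(distances, 0.5, 8.0)        # assumed ground truth
intensities += rng.normal(0.0, 0.005, distances.size)     # synthetic measurement noise
params, _ = curve_fit(intensity_model, distances, intensities, p0=(0.4, 5.0))
print("fitted transmitted intensity %.3f, scatter coefficient %.2f" % tuple(params))
```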

15 pages, 3885 KiB  
Technical Note
A Fast and Low-Cost Human Body 3D Scanner Using 100 Cameras
by Mojtaba Zeraatkar and Khalil Khalili
J. Imaging 2020, 6(4), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040021 - 09 Apr 2020
Cited by 16 | Viewed by 5971
Abstract
The human body is one of the most complicated objects to model because of its complex features, non-rigidity, and the time required to take body measurements. Basic technologies available in this field range from small and low-cost scanners that must be moved around the body to large and high-cost scanners that can capture all sides of the body simultaneously. This paper presents an image-based scanning system which employs the structure-from-motion method. The design and development process of the scanner includes its physical structure, electronic components, and the algorithms used for extracting 3D data. In addition to the accuracy, which is one of the main parameters to consider when choosing a 3D scanner, the time and cost of the system are among the most important parameters for evaluating a scanner system in the field of human scanning. Because of the non-static nature of the human body, the scanning time is particularly important. On the other hand, a high-cost system may lead to limited use of such systems. The design developed in this paper, which utilizes 100 cameras, facilitates the acquisition of geometric data in a fraction of a second (0.001 s) and provides the capabilities of large, freestanding scanners at a price akin to that of smaller, mobile ones. Full article
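The scanner relies on a structure-from-motion pipeline across its 100 views. As a hedged, minimal illustration of one step of such a pipeline (feature matching and two-view relative pose with OpenCV), see the sketch below; the camera intrinsics K are an assumed input, and the full multi-camera reconstruction is not reproduced.

```python
# Minimal two-view structure-from-motion step, sketched with OpenCV; this is an
# illustration of the general technique, not the scanner's actual pipeline.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """img1, img2: grayscale images; K: assumed 3x3 camera intrinsics matrix."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and unit-scale translation between the two cameras
```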

16 pages, 4917 KiB  
Article
Spectrum Correction Using Modeled Panchromatic Image for Pansharpening
by Naoko Tsukamoto, Yoshihiro Sugaya and Shinichiro Omachi
J. Imaging 2020, 6(4), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040020 - 06 Apr 2020
Cited by 2 | Viewed by 3751
Abstract
Pansharpening is a method applied for the generation of high-spatial-resolution multi-spectral (MS) images using panchromatic (PAN) and multi-spectral images. A common challenge in pansharpening is to reduce the spectral distortion caused by increasing the resolution. In this paper, we propose a method for reducing the spectral distortion based on the intensity–hue–saturation (IHS) method targeting satellite images. The IHS method improves the resolution of an RGB image by replacing the intensity of the low-resolution RGB image with that of the high-resolution PAN image. The spectral characteristics of the PAN and MS images are different, and this difference may cause spectral distortion in the pansharpened image. Although many solutions for reducing spectral distortion using a modeled spectrum have been proposed, the quality of the outcomes obtained by these approaches depends on the image dataset. In the proposed technique, we model a low-spatial-resolution PAN image according to a relative spectral response graph, and then the corrected intensity is calculated using the model and the observed dataset. Experiments were conducted on three IKONOS datasets, and the results were evaluated using some major quality metrics. This quantitative evaluation demonstrated the stability of the pansharpened images and the effectiveness of the proposed method. Full article
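The IHS substitution step described in the abstract amounts to adding the difference between the PAN image and the MS-derived intensity to every band. The sketch below shows that baseline step only; the paper's intensity correction via a PAN image modelled from the relative spectral response graph is not reproduced.

```python
# Sketch of plain IHS-substitution pansharpening (the fast/generalised IHS form),
# the baseline the paper corrects; not the proposed spectrum-corrected method.
import numpy as np

def ihs_pansharpen(ms_upsampled, pan):
    """ms_upsampled: (H, W, 3) float MS image resampled to PAN resolution; pan: (H, W) float."""
    intensity = ms_upsampled.mean(axis=2)        # simple intensity component of the MS image
    delta = pan - intensity                      # intensity replacement term
    return ms_upsampled + delta[..., None]       # add the same correction to each band
```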

12 pages, 6013 KiB  
Article
Automatic Recognition of Dendritic Solidification Structures: DenMap
by Bogdan Nenchev, Joel Strickland, Karl Tassenberg, Samuel Perry, Simon Gill and Hongbiao Dong
J. Imaging 2020, 6(4), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040019 - 03 Apr 2020
Cited by 14 | Viewed by 4312
Abstract
Dendrites are the predominant solidification structures in directionally solidified alloys and control the maximum length scale for segregation. The conventional industrial method for identification of dendrite cores and primary dendrite spacing is performed by time-consuming, laborious manual measurement. In this work, we developed a novel image processing and pattern recognition algorithm, DenMap, to identify dendritic cores. A systematic row scan with a specially selected template image over the image of interest is applied via a normalised cross-correlation algorithm. The DenMap algorithm locates the exact dendritic core position with 98% accuracy for a batch of SEM images of typical as-cast CMSX-4® microstructures in under 90 s per image. Such accuracy is achieved due to a sequence of specially selected image pre-processing methods. Coupled with statistical analysis, the model has the potential to gather large quantities of structural data accurately and rapidly, allowing for optimisation and quality control of industrial processes to improve the mechanical and creep performance of materials. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
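The core operation named in the abstract, normalised cross-correlation between a template patch and the image, can be sketched as below. This is a hedged illustration (no DenMap pre-processing, no non-maximum suppression), and the match threshold is an assumption.

```python
# Hedged sketch: locating repeated structures (e.g., dendrite cores) by normalised
# cross-correlation against a template patch; DenMap's pre-processing and scan
# strategy are not reproduced here.
import cv2
import numpy as np

def find_cores(image, template, threshold=0.7):
    """image, template: grayscale uint8 arrays; returns (row, col) candidate peaks."""
    ncc = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    peaks = np.argwhere(ncc >= threshold)          # candidate core positions (top-left corners)
    offset = np.array(template.shape[:2]) // 2
    return peaks + offset                          # shift to template centres
```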

21 pages, 6439 KiB  
Article
Explorative Imaging and Its Implementation at the FleX-ray Laboratory
by Sophia Bethany Coban, Felix Lucka, Willem Jan Palenstijn, Denis Van Loo and Kees Joost Batenburg
J. Imaging 2020, 6(4), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040018 - 02 Apr 2020
Cited by 30 | Viewed by 5779
Abstract
In tomographic imaging, the traditional process consists of an expert and an operator collecting data, with the expert then working on the reconstructed slices and drawing conclusions. The quality of the reconstructions depends heavily on the quality of the collected data, yet in this traditional process the expert has very little influence over the acquisition parameters, the experimental plan or the collected data. It is often the case that the expert has to draw limited conclusions from the reconstructions, or adapt a research question to the data available. This method of imaging is static and sequential, and limits the potential of tomography as a research tool. In this paper, we propose a more dynamic process of imaging where experiments are tailored around a sample or the research question; intermediate reconstructions and analysis are available almost instantaneously, and the expert has input at any stage of the process (including during acquisition) to improve acquisition or image reconstruction. Through various applications of 2D, 3D and dynamic 3D imaging at the FleX-ray Laboratory, we present the unexpected journey of exploration a research question undergoes, and the surprising benefits it yields. Full article

10 pages, 15248 KiB  
Article
Evaluation of Automatic Facial Wrinkle Detection Algorithms
by Remah Mutasim Elbashir and Moi Hoon Yap
J. Imaging 2020, 6(4), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040017 - 01 Apr 2020
Cited by 7 | Viewed by 8171
Abstract
Facial wrinkles (considered to be natural features) appear as people get older. Wrinkle detection is an important aspect of applications that depend on facial skin changes, such as face age estimation and soft biometrics. While existing wrinkle detection algorithms focus on horizontal forehead lines, new methods are needed to detect all wrinkles (vertical and horizontal) on the whole face. Therefore, we evaluated the performance of wrinkle detection algorithms on the whole face and proposed an enhancement technique to improve the performance. More specifically, we used 45 images from the Face Recognition Technology (FERET) dataset and 25 images from the Sudanese dataset. For ground truth annotations, the selected images were manually annotated by the researcher. The experiments showed that the method with enhancement performed better at detecting facial wrinkles when compared to the state-of-the-art methods. When evaluated on FERET, the average Jaccard similarity indices were 56.17%, 31.69% and 15.87% for the enhancement method, the Hybrid Hessian Filter and the Gabor Filter, respectively. Full article
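The evaluation metric is the Jaccard similarity index between a predicted wrinkle mask and the manual ground-truth annotation; a minimal sketch of that computation follows (illustrative, assuming boolean masks of equal shape).

```python
# Minimal sketch of the Jaccard similarity index between a predicted wrinkle
# mask and a manually annotated ground-truth mask.
import numpy as np

def jaccard_index(pred_mask, gt_mask):
    """pred_mask, gt_mask: boolean arrays of the same shape."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union else 1.0   # both masks empty -> perfect agreement
```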

15 pages, 2858 KiB  
Article
Color Image Complexity versus Over-Segmentation: A Preliminary Study on the Correlation between Complexity Measures and Number of Segments
by Mihai Ivanovici, Radu-Mihai Coliban, Cosmin Hatfaludi and Irina Emilia Nicolae
J. Imaging 2020, 6(4), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040016 - 30 Mar 2020
Cited by 5 | Viewed by 3730
Abstract
It is said that image segmentation is a very difficult or complex task. First of all, we emphasize the subtle difference between the notions of difficulty and complexity. Then, in this article, we focus on the question of how two widely used color image complexity measures correlate with the number of segments resulting from over-segmentation. We study the evolution of both the image complexity measures and the number of segments as the image complexity is gradually decreased by means of low-pass filtering. In this way, we tackle the possibility of predicting the difficulty of color image segmentation based on image complexity measures. We analyze the complexity of images from the point of view of color entropy and color fractal dimension, and, for color fractal images and the Berkeley data set, we correlate these two metrics with the segmentation results, more specifically the number of quasi-flat zones and the number of JSEG regions in the resulting segmentation map. We report on our experimental results and draw conclusions. Full article
(This article belongs to the Special Issue Color Image Segmentation)
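One of the two complexity measures discussed, color entropy, can be computed from a quantised color histogram as in the sketch below (illustrative only; the bin count is an assumption, and the color fractal dimension is not shown).

```python
# Sketch of colour entropy computed from a quantised colour histogram; an
# illustration of the measure, not the paper's exact implementation.
import numpy as np

def color_entropy(rgb, bins_per_channel=8):
    """rgb: (H, W, 3) uint8 image; returns the colour histogram entropy in bits."""
    q = rgb.reshape(-1, 3).astype(np.int64) // (256 // bins_per_channel)
    codes = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    counts = np.bincount(codes, minlength=bins_per_channel ** 3)
    p = counts[counts > 0] / counts.sum()        # probabilities of occupied colour bins
    return float(-(p * np.log2(p)).sum())
```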

14 pages, 1535 KiB  
Article
A New Pseudo-Spectral Method Using the Discrete Cosine Transform
by Izumi Ito
J. Imaging 2020, 6(4), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6040015 - 28 Mar 2020
Cited by 4 | Viewed by 3792
Abstract
The pseudo-spectral (PS) method based on the Fourier transform is a numerical method for estimating derivatives. Generally, the discrete Fourier transform (DFT) is used when implementing the PS method. However, when the values on both sides of a sequence differ significantly, oscillatory approximations appear around both ends due to the periodicity implied by the DFT. To address this problem, we propose a new PS method based on symmetric extension. We mathematically derive the proposed method using the discrete cosine transform (DCT) in the forward transform from the relation between the DFT and the DCT. The DCT allows a sequence to be treated as a symmetrically extended sequence and estimates derivatives in the transformed domain. The superior performance of the proposed method is demonstrated through image interpolation. Potential applications of the proposed method are numerical simulations using the Fourier-based PS method in many fields such as fluid dynamics, meteorology, and geophysics. Full article
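The boundary problem and the symmetric-extension remedy can be illustrated with a short sketch: the standard DFT pseudo-spectral derivative assumes periodicity, while mirroring the sequence before differentiating mimics the effect the DCT-based formulation achieves. This is an illustration of the idea only, not the paper's exact DCT derivation.

```python
# Illustrative sketch: standard DFT pseudo-spectral derivative versus the same
# derivative applied to a symmetrically extended sequence (the effect exploited
# by the DCT-based method); not the paper's exact formulation.
import numpy as np

def ps_derivative_dft(f, dx=1.0):
    """Standard Fourier pseudo-spectral first derivative (assumes periodicity)."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=dx)       # spectral differentiation factors
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

def ps_derivative_symmetric(f, dx=1.0):
    """Mirror the sequence before differentiating to suppress the boundary
    oscillations caused by the implicit periodicity of the DFT."""
    ext = np.concatenate([f, f[::-1]])              # even (mirror) extension
    return ps_derivative_dft(ext, dx)[:f.size]
```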
