J. Imaging, Volume 3, Issue 4 (December 2017) – 28 articles

Cover Story: This paper proposes a novel method to tackle the content-based image retrieval (CBIR) task using both texture and color features. The main motivation is to represent and characterize an image by a set of local descriptors extracted from characteristic points (i.e., keypoints) within the image. Then, the dissimilarity measure is calculated based on the geometric distance between the topological feature spaces (i.e., manifolds) formed by the sets of local descriptors generated from each image of the database. In this work, we propose to extract and exploit the local extrema pixels (i.e., local maximum and local minimum pixels in terms of intensity) as our feature points. We then construct the local extrema-based descriptor (LED) for each keypoint by integrating all color, spatial as well as gradient information captured by its nearest local extrema. As a result, each image is encoded by an LED feature point cloud, and Riemannian distances between these point clouds enable us to tackle CBIR.
Article
Characterization of Crystallographic Structures Using Bragg-Edge Neutron Imaging at the Spallation Neutron Source
by Gian Song, Jiao Y. Y. Lin, Jean C. Bilheux, Qingge Xie, Louis J. Santodonato, Jamie J. Molaison, Harley D. Skorpenske, Antonio M. Dos Santos, Chris A. Tulk, Ke An, Alexandru D. Stoica, Michael M. Kirka, Ryan R. Dehoff, Anton S. Tremsin, Jeffrey Bunn, Lindsay M. Sochalski-Kolbus and Hassina Z. Bilheux
J. Imaging 2017, 3(4), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040065 - 20 Dec 2017
Cited by 29 | Viewed by 8271
Abstract
Over the past decade, wavelength-dependent neutron radiography, also known as Bragg-edge imaging, has been employed as a non-destructive bulk characterization method due to its sensitivity to coherent elastic neutron scattering that is associated with crystalline structures. Several analysis approaches have been developed to quantitatively determine crystalline orientation, lattice strain, and phase distribution. In this study, we report a systematic investigation of the crystal structures of metallic materials (such as selected textureless powder samples and additively manufactured (AM) Inconel 718 samples), using Bragg-edge imaging at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS). Firstly, we have implemented a phenomenological Gaussian-based fitting in a Python-based computer code called iBeatles. Secondly, we have developed a model-based approach to analyze Bragg-edge transmission spectra, which allows quantitative determination of the crystallographic attributes. Moreover, neutron diffraction measurements were carried out to validate the Bragg-edge analytical methods. These results demonstrate that the microstructural complexity (in this case, texture) plays a key role in determining the crystallographic parameters (lattice constant or interplanar spacing), which implies that the Bragg-edge image analysis methods must be carefully selected based on the material structures. Full article
(This article belongs to the Special Issue Neutron Imaging)
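For readers unfamiliar with the edge-fitting step: a Bragg edge in a transmission spectrum is commonly modeled as a step broadened by a Gaussian resolution function, i.e., a complementary-error-function profile, and the fitted edge position gives the interplanar spacing via the Bragg cutoff (lambda = 2d). The sketch below is a generic illustration of such a fit; the function form, parameter names and the example edge position are assumptions, not the iBeatles implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def bragg_edge(wavelength, edge_pos, width, a0, b0):
    """Illustrative Gaussian-broadened step: transmission rises across the
    Bragg edge located at `edge_pos` with resolution `width`."""
    step = 0.5 * erfc(-(wavelength - edge_pos) / (np.sqrt(2.0) * width))
    return a0 + b0 * step

# Synthetic spectrum around a hypothetical edge at 4.17 angstrom
wl = np.linspace(3.9, 4.4, 200)
true = bragg_edge(wl, 4.17, 0.015, 0.55, 0.25)
noisy = true + np.random.default_rng(0).normal(0, 0.005, wl.size)

popt, _ = curve_fit(bragg_edge, wl, noisy, p0=[4.1, 0.02, 0.5, 0.3])
d_spacing = popt[0] / 2.0          # Bragg condition at the edge cutoff: lambda = 2*d
print(f"fitted edge: {popt[0]:.4f} A  ->  d = {d_spacing:.4f} A")
```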

Article
Restoration of Bi-Contrast MRI Data for Intensity Uniformity with Bayesian Coring of Co-Occurrence Statistics
by Stathis Hadjidemetriou, Marios Nikos Psychogios, Paul Lingor, Kajetan Von Eckardstein and Ismini Papageorgiou
J. Imaging 2017, 3(4), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040067 - 15 Dec 2017
Cited by 2 | Viewed by 4515
Abstract
The reconstruction of MRI data assumes a uniform radio-frequency field. However, in practice, the radio-frequency field is inhomogeneous and leads to anatomically inconsequential intensity non-uniformities across an image. An anatomic region can be imaged with multiple contrasts reconstructed independently, each suffering from different non-uniformities. These artifacts can complicate the further automated analysis of the images. A method is presented for the joint intensity uniformity restoration of two such images. The effect of the intensity distortion on the auto-co-occurrence statistics of each image as well as on the joint-co-occurrence statistics of the two images is modeled and used for their non-stationary restoration, followed by their back-projection to the images. Several constraints that ensure a stable restoration are also imposed. Moreover, the method considers the inevitable differences between the signal regions of the two images. The method has been evaluated extensively with BrainWeb phantom brain data as well as with brain anatomic data from the Human Connectome Project (HCP) and with data of Parkinson’s disease patients. The performance of the proposed method has been compared with that of the N4ITK tool. The proposed method increases tissue contrast at least 4.62 times more than the N4ITK tool for the BrainWeb images. The dynamic range with the N4ITK method for the same images is increased by up to +29.77%, whereas, for the proposed method, it has a corresponding limited decrease of −1.15%, as expected. The validation has demonstrated the accuracy and stability of the proposed method and hence its ability to reduce the requirements for additional calibration scans. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Article
Deep Learning vs. Conventional Machine Learning: Pilot Study of WMH Segmentation in Brain MRI with Absence or Mild Vascular Pathology
by Muhammad Febrian Rachmadi, Maria Del C. Valdés-Hernández, Maria Leonora Fatimah Agan and Taku Komura
J. Imaging 2017, 3(4), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040066 - 14 Dec 2017
Cited by 18 | Viewed by 9509
Abstract
In the wake of the use of deep learning algorithms in medical image analysis, we compared the performance of deep learning algorithms, namely the deep Boltzmann machine (DBM), convolutional encoder network (CEN) and patch-wise convolutional neural network (patch-CNN), with two conventional machine learning schemes, support vector machine (SVM) and random forest (RF), for white matter hyperintensities (WMH) segmentation on brain MRI with mild or no vascular pathology. We also compared all these approaches with a method from the Lesion Segmentation Tool public toolbox named the lesion growth algorithm (LGA). We used a dataset comprising 60 MRI scans from 20 subjects in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, each scanned once a year over three consecutive years. Spatial agreement score, receiver operating characteristic and precision-recall performance curves, volume disagreement score, agreement with intra-/inter-observer reliability measurements and visual evaluation were used to find the best configuration of each learning algorithm for WMH segmentation. By using optimum threshold values for the probabilistic output from each algorithm to produce binary masks of WMH, we found that SVM and RF produced good results for medium to very large WMH burden, but the deep learning algorithms performed generally better than the conventional ones in most evaluations. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
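For reference, spatial agreement between a predicted binary WMH mask and a reference annotation is commonly reported with the Dice similarity coefficient; the snippet below is a generic sketch of that metric applied to a thresholded probabilistic output, not code from the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy example: thresholding a probabilistic output before scoring
prob_map = np.random.default_rng(1).random((64, 64))
reference = prob_map > 0.7                 # stand-in for a ground-truth mask
prediction = prob_map > 0.65               # mask produced by an optimum threshold
print(f"Dice = {dice_coefficient(prediction, reference):.3f}")
```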

Review
Small Angle Scattering in Neutron Imaging—A Review
by Markus Strobl, Ralph P. Harti, Christian Gruenzweig, Robin Woracek and Jeroen Plomp
J. Imaging 2017, 3(4), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040064 - 13 Dec 2017
Cited by 27 | Viewed by 9401
Abstract
Conventional neutron imaging utilizes the beam attenuation caused by scattering and absorption through the materials constituting an object in order to investigate its macroscopic inner structure. Small angle scattering has basically no impact on such images under the geometrical conditions applied. Nevertheless, in recent years different experimental methods have been developed in neutron imaging which make it possible not only to generate contrast based on neutrons scattered to very small angles, but also to map and quantify small angle scattering with the spatial resolution of neutron imaging. This enables neutron imaging to access length scales which are not directly resolved in real space and to investigate bulk structures and processes spanning multiple length scales, from centimeters to tens of nanometers. Full article
(This article belongs to the Special Issue Neutron Imaging)

Article
Mereotopological Correction of Segmentation Errors in Histological Imaging
by David A. Randell, Antony Galton, Shereen Fouad, Hisham Mehanna and Gabriel Landini
J. Imaging 2017, 3(4), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040063 - 12 Dec 2017
Cited by 14 | Viewed by 4654
Abstract
In this paper we describe mereotopological methods to programmatically correct image segmentation errors, in particular those that fail to fulfil expected spatial relations in digitised histological scenes. The proposed approach exploits a spatial logic called discrete mereotopology to integrate a number of qualitative spatial reasoning and constraint satisfaction methods into imaging procedures. Eight mereotopological relations defined on binary region pairs are represented as nodes in a set of 20 directed graphs, where the node-to-node graph edges encode the possible transitions between the spatial relations after set-theoretic and discrete topological operations on the regions are applied. The graphs allow one to identify sequences of operations that, applied to regions standing in a given relation, resegment an image that fails to conform to a valid histological model into one that does. Examples of the methods are presented using images of H&E-stained human carcinoma cell line cultures. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
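The search for a correcting sequence of operations can be pictured as a shortest-path problem over such a relation-transition graph. The tiny sketch below is purely illustrative: the relation names and operation labels are hypothetical stand-ins, not the actual graphs defined in the paper.

```python
from collections import deque

# Hypothetical transition graph: relation -> [(operation, resulting relation), ...]
transitions = {
    "partially_overlaps": [("erode_A", "disconnected"), ("dilate_A", "proper_part")],
    "disconnected": [("dilate_A", "externally_connected")],
    "externally_connected": [("dilate_A", "partially_overlaps")],
    "proper_part": [],
}

def operation_sequence(start: str, goal: str):
    """Breadth-first search for a sequence of operations turning `start` into `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        relation, ops = queue.popleft()
        if relation == goal:
            return ops
        for op, nxt in transitions.get(relation, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, ops + [op]))
    return None

print(operation_sequence("partially_overlaps", "proper_part"))  # -> ['dilate_A']
```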

Article
DocCreator: A New Software for Creating Synthetic Ground-Truthed Document Images
by Nicholas Journet, Muriel Visani, Boris Mansencal, Kieu Van-Cuong and Antoine Billy
J. Imaging 2017, 3(4), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040062 - 11 Dec 2017
Cited by 41 | Viewed by 9270
Abstract
Most digital libraries that provide user-friendly interfaces, enabling quick and intuitive access to their resources, are based on Document Image Analysis and Recognition (DIAR) methods. Such DIAR methods need ground-truthed document images to be evaluated/compared and, in some cases, trained. Especially with the advent of deep learning-based approaches, the required size of annotated document datasets seems to be ever-growing. Manually annotating real documents has many drawbacks, which often leads to small reliably annotated datasets. In order to circumvent those drawbacks and enable the generation of massive ground-truthed data with high variability, we present DocCreator, a multi-platform and open-source software tool able to create many synthetic document images with controlled ground truth. DocCreator has been used in various experiments, showing the benefit of using such synthetic images to enrich the training stage of DIAR tools. Full article
(This article belongs to the Special Issue Document Image Processing)

Article
Epithelium and Stroma Identification in Histopathological Images Using Unsupervised and Semi-Supervised Superpixel-Based Segmentation
by Shereen Fouad, David Randell, Antony Galton, Hisham Mehanna and Gabriel Landini
J. Imaging 2017, 3(4), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040061 - 11 Dec 2017
Cited by 7 | Viewed by 5649
Abstract
We present superpixel-based segmentation frameworks for unsupervised and semi-supervised epithelium-stroma identification in histopathological images of oropharyngeal tissue microarrays. A superpixel segmentation algorithm is initially used to split up the image into binary regions (superpixels), and their colour features are extracted and fed into several base clustering algorithms with various parameter initializations. Two Consensus Clustering (CC) formulations are then used: the Evidence Accumulation Clustering (EAC) and the voting-based consensus function. These combine the base clustering outcomes to obtain a more robust detection of tissue compartments than the base clustering methods on their own. For the voting-based function, a technique is introduced to generate consistent labellings across the base clustering results. The obtained CC result is then utilized to build a self-training Semi-Supervised Classification (SSC) model. Unlike supervised segmentations, which rely on a large number of labelled training images, our SSC approach performs a quality segmentation while relying on few labelled samples. Experiments conducted on forty-five hand-annotated images of oropharyngeal cancer tissue microarrays show that (a) the CC algorithm generates more accurate and stable results than the individual clustering algorithms; (b) the clustering performance of the voting-based function outperforms the existing EAC; and (c) the proposed SSC algorithm, trained with only a few labelled instances, outperforms the supervised methods. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
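Voting-based consensus clustering needs the base clusterings to be relabelled consistently before votes can be counted. One standard way to do this, shown below as a generic illustration rather than the paper's exact procedure, is to align each partition to a reference with the Hungarian algorithm and then take a per-sample majority vote.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(reference: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Relabel `labels` so that its clusters best match `reference` (Hungarian matching)."""
    k = max(reference.max(), labels.max()) + 1
    cost = np.zeros((k, k))
    for r in range(k):
        for c in range(k):
            cost[r, c] = -np.sum((reference == r) & (labels == c))
    rows, cols = linear_sum_assignment(cost)
    mapping = {c: r for r, c in zip(rows, cols)}
    return np.array([mapping[l] for l in labels])

def voting_consensus(partitions):
    """Majority vote over label-aligned base clusterings (list of 1D label arrays)."""
    reference = partitions[0]
    aligned = np.stack([reference] + [align_labels(reference, p) for p in partitions[1:]])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, aligned)

base = [np.array([0, 0, 1, 1, 1]), np.array([1, 1, 0, 0, 0]), np.array([0, 0, 0, 1, 1])]
print(voting_consensus(base))   # -> consensus labels [0 0 1 1 1]
```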

Article
Performance of the Commercial PP/ZnS:Cu and PP/ZnS:Ag Scintillation Screens for Fast Neutron Imaging
by Malgorzata G. Makowska, Bernhard Walfort, Albert Zeller, Christian Grünzweig and Thomas Bücherl
J. Imaging 2017, 3(4), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040060 - 10 Dec 2017
Cited by 20 | Viewed by 6948
Abstract
Fast neutron imaging has great potential as a nondestructive technique for testing large objects. The main factor limiting applications of this technique is the detection technology, which offers relatively poor spatial resolution and low detection efficiency, resulting in very long exposure times. Therefore, research on the development of scintillators for fast neutron imaging is of high importance. A comparison of the light output, gamma radiation sensitivity and spatial resolution of commercially available scintillator screens composed of PP/ZnS:Cu and PP/ZnS:Ag of different thicknesses is presented. The scintillators were provided by the company RC Tritec AG, and the tests were performed at the NECTAR facility located at the FRM II nuclear research reactor. It was shown that the light output increases and the spatial resolution decreases with the scintillator thickness. Both compositions of the scintillating material provide similar light output, while the gamma sensitivity of PP/ZnS:Cu is significantly higher compared to PP/ZnS:Ag-based scintillators. Moreover, we report which factors should be considered when choosing a scintillator and what the limitations of the investigated types of scintillators are. Full article
(This article belongs to the Special Issue Neutron Imaging)

Article
Preliminary Results of Clover and Grass Coverage and Total Dry Matter Estimation in Clover-Grass Crops Using Image Analysis
by Anders K. Mortensen, Henrik Karstoft, Karen Søegaard, René Gislum and Rasmus N. Jørgensen
J. Imaging 2017, 3(4), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040059 - 06 Dec 2017
Cited by 8 | Viewed by 6830
Abstract
The clover-grass ratio is an important factor in composing feed ratios for livestock. Cameras in the field allow the user to estimate the clover-grass ratio using image analysis; however, current methods assume the total dry matter is known. This paper presents the preliminary results of an image analysis method for non-destructively estimating the total dry matter of clover-grass. The presented method includes three steps: (1) classification of image illumination using a histogram of the difference in excess green and excess red; (2) segmentation of clover and grass using edge detection and morphology; and (3) estimation of total dry matter using grass coverage derived from the segmentation and climate parameters. The method was developed and evaluated on images captured in a clover-grass plot experiment during the spring growing season. The preliminary results are promising and show a high correlation between the image-based total dry matter estimate and the harvested dry matter (R² = 0.93) with an RMSE of 210 kg ha⁻¹. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
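The excess green and excess red indices referred to in step (1) are standard chromaticity-based vegetation indices (ExG = 2g − r − b and ExR = 1.4r − g on chromaticity-normalised channels). The snippet below is a generic illustration of computing their per-pixel difference and its histogram, not the authors' implementation.

```python
import numpy as np

def exg_minus_exr(rgb: np.ndarray) -> np.ndarray:
    """Excess-green minus excess-red index per pixel, using chromaticity coordinates.
    `rgb` is an (H, W, 3) float array with values in [0, 1]."""
    total = rgb.sum(axis=2, keepdims=True) + 1e-8        # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, 2, 0)             # chromaticity r, g, b
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return exg - exr

# Toy usage: histogram of the index, as used to classify image illumination
image = np.random.default_rng(2).random((120, 160, 3))
hist, edges = np.histogram(exg_minus_exr(image), bins=32, range=(-2.5, 3.0))
```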

Article
Neutron Imaging of Laser Melted SS316 Test Objects with Spatially Resolved Small Angle Neutron Scattering
by Adam J. Brooks, Gerald L. Knapp, Jumao Yuan, Caroline G. Lowery, Max Pan, Bridget E. Cadigan, Shengmin Guo, Daniel S. Hussey and Leslie G. Butler
J. Imaging 2017, 3(4), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040058 - 05 Dec 2017
Cited by 10 | Viewed by 5198
Abstract
A novel neutron far field interferometer is explored for sub-micron porosity detection in laser sintered stainless steel alloy 316 (SS316) test objects. The results shown are images and volumes of the first quantitative neutron dark-field tomography at various autocorrelation lengths, ξ. In this preliminary work, the beam defining slits were adjusted to an uncalibrated opening of 0.5 mm horizontal and 5 cm vertical; the images are blurred along the vertical direction. In spite of the blurred attenuation images, the dark-field images reveal structural information at the micron scale. The topics explored include: the accessible size range of defects, potentially 338 nm to 4.5 μm, that can be imaged with the small angle scattering images; the spatial resolution of the attenuation image; the maximum sample dimensions compatible with interferometry optics and neutron attenuation; the procedure for reduction of the raw interferogram images into attenuation, differential phase contrast, and small angle scattering (dark-field) images; and the role of neutron far field interferometry in additive manufacturing to assess sub-micron porosity. Full article
(This article belongs to the Special Issue Neutron Imaging)

Article
Olive Plantation Mapping on a Sub-Tree Scale with Object-Based Image Analysis of Multispectral UAV Data; Operational Potential in Tree Stress Monitoring
by Christos Karydas, Sandra Gewehr, Miltiadis Iatrou, George Iatrou and Spiros Mourelatos
J. Imaging 2017, 3(4), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040057 - 04 Dec 2017
Cited by 18 | Viewed by 5687
Abstract
The objective of this study was to develop a methodology for mapping olive plantations on a sub-tree scale. For this purpose, multispectral imagery of an almost 60-ha plantation in Greece was acquired with an Unmanned Aerial Vehicle. Objects smaller than the tree crown were produced with image segmentation. Three image features were indicated as optimum for discriminating olive trees from other objects in the plantation, in a rule-based classification algorithm. After limited manual corrections, the final output was validated by an overall accuracy of 93%. The overall processing chain can be considered as suitable for operational olive tree monitoring for potential stresses. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)

Article
Rapid Interactive and Intuitive Segmentation of 3D Medical Images Using Radial Basis Function Interpolation
by Tanja Kurzendorfer, Peter Fischer, Negar Mirshahzadeh, Thomas Pohl, Alexander Brost, Stefan Steidl and Andreas Maier
J. Imaging 2017, 3(4), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040056 - 30 Nov 2017
Cited by 4 | Viewed by 5431
Abstract
Segmentation is one of the most important parts of medical image analysis. Manual segmentation is very cumbersome, time-consuming, and prone to inter-observer variability. Fully automatic segmentation approaches require a large amount of labeled training data and may fail in difficult or abnormal cases. In this work, we propose a new method for 2D segmentation of individual slices and 3D interpolation of the segmented slices. The Smart Brush functionality quickly segments the region of interest in a few 2D slices. Given these annotated slices, our adapted formulation of Hermite radial basis functions reconstructs the 3D surface. Effective interactions with a smaller number of equations accelerate the computation, so that real-time, intuitive, interactive segmentation of 3D objects can be supported. The proposed method is evaluated on 12 clinical 3D magnetic resonance imaging data sets and compared to gold standard annotations of the left ventricle from a clinical expert. The automatic evaluation of the 2D Smart Brush resulted in an average Dice coefficient of 0.88 ± 0.09 for the individual slices. For the 3D interpolation using Hermite radial basis functions, an average Dice coefficient of 0.94 ± 0.02 is achieved. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
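The interpolation step can be pictured with a generic radial basis function fit: scattered points from the annotated slices (on-surface points, plus off-surface points carrying signed values) are interpolated into an implicit function whose zero level set is the 3D surface. This sketch uses SciPy's general-purpose RBFInterpolator rather than the paper's Hermite formulation, so the setup below is an assumption for illustration only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Points sampled from two annotated contours (rings at z = 0 and z = 3)
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)

def ring(z, r):
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), np.full_like(theta, z)])

surface_pts = np.vstack([ring(0.0, 5.0), ring(3.0, 4.0)])

# Off-surface constraints: an interior point gets a negative value, an exterior one a positive value
centers = np.vstack([surface_pts, [[0.0, 0.0, 1.5]], [[10.0, 10.0, 10.0]]])
values = np.concatenate([np.zeros(len(surface_pts)), [-1.0], [1.0]])

implicit = RBFInterpolator(centers, values, kernel="thin_plate_spline")

# The reconstructed surface is the zero level set of `implicit`; probe a few points
probe = np.array([[5.0, 0.0, 0.0], [0.0, 0.0, 0.0], [8.0, 8.0, 8.0]])
print(implicit(probe))   # ~0 on the contour, negative inside, positive outside
```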

Article
Modelling of Orthogonal Craniofacial Profiles
by Hang Dai, Nick Pears and Christian Duncan
J. Imaging 2017, 3(4), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040055 - 30 Nov 2017
Cited by 2 | Viewed by 4901
Abstract
We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side view, front view and top view), using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from the 3D raw scans. Fully-automatic profile annotation, subdivision and registration methods are used to establish dense correspondence among sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA), before principal component analysis is applied to generate a morphable model. Additionally, we propose a new alternative alignment called the Ellipse Centre Nasion (ECN) method. Our model is used in a case study of craniosynostosis intervention outcome evaluation, and the evaluation reveals that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
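Once the profiles are in dense correspondence and Procrustes-aligned, building a morphable model reduces to a principal component analysis of the stacked profile coordinates, and new profile shapes are generated as the mean plus a weighted sum of the leading modes. The sketch below is a generic illustration of that construction, with synthetic data standing in for the aligned profiles.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for N aligned profiles, each with K 2D landmarks, flattened to 2K values
N, K = 100, 50
profiles = rng.normal(size=(N, 2 * K))

# Morphable model = mean shape + principal components of the centred data
mean_shape = profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(profiles - mean_shape, full_matrices=False)
components = Vt                      # rows: modes of shape variation
std_devs = S / np.sqrt(N - 1)        # per-mode standard deviations

def generate_profile(weights: np.ndarray) -> np.ndarray:
    """Synthesize a profile from the first len(weights) modes (weights in units of std dev)."""
    m = len(weights)
    flat = mean_shape + (weights * std_devs[:m]) @ components[:m]
    return flat.reshape(K, 2)        # back to K (x, y) landmark pairs

new_shape = generate_profile(np.array([2.0, -1.0, 0.5]))   # e.g. +2 sigma on mode 1
```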

Article
Android-Based Verification System for Banknotes
by Ubaid Ur Rahman, Allah Bux Sargano and Usama Ijaz Bajwa
J. Imaging 2017, 3(4), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040054 - 24 Nov 2017
Cited by 6 | Viewed by 8477
Abstract
With the advancement in imaging technologies for scanning and printing, the production of counterfeit banknotes has become cheaper, easier, and more common. The proliferation of counterfeit banknotes causes losses to banks, traders, and individuals involved in financial transactions. Hence, efficient and reliable techniques for the detection of counterfeit banknotes are urgently needed. With the availability of powerful smartphones, it has become possible to perform complex computations and image processing tasks on these phones. In addition, the number of smartphone users has grown greatly and continues to increase. This is a great motivating factor for researchers and developers to propose innovative mobile-based solutions. In this study, a novel technique for the verification of Pakistani banknotes is developed, targeting smartphones running the Android platform. The proposed technique is based on statistical features and the surface roughness of a banknote, which represent different properties of the banknote, such as the paper material, printing ink and paper quality. The selection of these features is motivated by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) analysis of genuine and counterfeit banknotes. In this regard, two important areas of the banknote, i.e., the serial number and flag portions, were considered, since these portions showed the maximum difference between genuine and counterfeit banknotes. The analysis confirmed that genuine and counterfeit banknotes are very different in terms of the printing process, the ingredients used in the preparation of banknotes, and the quality of the paper. After extracting the discriminative set of features, a support vector machine is used for classification. The experimental results confirm the high accuracy of the proposed technique. Full article
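The final classification step (statistical and roughness features fed to a support vector machine) follows the usual supervised pattern. The snippet below is a generic scikit-learn sketch with placeholder features and synthetic labels, not the authors' Android implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Placeholder feature matrix: one row per banknote image, columns are statistical
# and roughness descriptors extracted from the serial-number and flag regions.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)          # 0 = counterfeit, 1 = genuine (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")   # near chance on random data
```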

Article
Alpha Channel Fragile Watermarking for Color Image Integrity Protection
by Barbara Bonafè, Marco Botta, Davide Cavagnino and Victor Pomponiu
J. Imaging 2017, 3(4), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040053 - 23 Nov 2017
Viewed by 4256
Abstract
This paper presents a fragile watermarking algorithm`m for the protection of the integrity of color images with alpha channel. The system is able to identify modified areas with very high probability, even with small color or transparency changes. The main characteristic of the algorithm is the embedding of the watermark by modifying the alpha channel, leaving the color channels untouched and introducing a very small error with respect to the host image. As a consequence, the resulting watermarked images have a very high peak signal-to-noise ratio. The security of the algorithm is based on a secret key defining the embedding space in which the watermark is inserted by means of the Karhunen–Loève transform (KLT) and a genetic algorithm (GA). Its high sensitivity to modifications is shown, proving the security of the whole system. Full article

Review
Neutron Imaging Facilities in a Global Context
by Eberhard H. Lehmann
J. Imaging 2017, 3(4), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040052 - 21 Nov 2017
Cited by 26 | Viewed by 7435
Abstract
Neutron Imaging (NI) has developed over the last decades from a film-based inspection method for non-destructive observations into a powerful research tool with many new and competitive methods. The most important technical step forward has been the introduction and optimization of digital imaging detection systems. In this way, direct quantification of the transmission process became possible, which is the basis for all advanced methods such as tomography, phase-contrast imaging and neutron microscopy. Neutron imaging facilities need to be installed at powerful neutron sources (reactors, spallation sources, other accelerator-driven systems). High neutron intensity is best used for either the highest spatial or temporal resolution or the best image quality. Since the number of such strong sources is decreasing world-wide due to the age of the reactors, the number of NI facilities is limited. There are a few installations with pioneering new concepts and versatile options on the one hand, but also relatively new sources with only limited performance thus far. It will be a challenge to couple the two parts of the community with the aim of installing state-of-the-art equipment at suitable beam ports and developing NI further towards a general research tool. In addition, sources with lower intensity should be equipped with modern installations in order to perform practical work as well as possible. Full article
(This article belongs to the Special Issue Neutron Imaging)

Review
Nitrogen (N) Mineral Nutrition and Imaging Sensors for Determining N Status and Requirements of Maize
by Abdelaziz Rhezali and Rachid Lahlali
J. Imaging 2017, 3(4), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040051 - 14 Nov 2017
Cited by 14 | Viewed by 5469
Abstract
Nitrogen (N) is one of the most limiting factors for maize (Zea mays L.) production worldwide. Over-fertilization of N may decrease yields and increase NO3 contamination of water. However, low N fertilization will decrease yields. The objective is to optimize the use of N fertilizers so as to maximize yields while preserving the environment. Knowledge of the factors affecting the mobility of N in the soil is crucial for determining ways to manage N in the field. Researchers have developed several methods to use N efficiently, relying on agronomic practices, the use of sensors and the analysis of digital images. These imaging sensors determine N requirements in plants based on changes in leaf chlorophyll and polyphenolic contents, the Normalized Difference Vegetation Index (NDVI), and the Dark Green Color Index (DGCI). Each method has revealed limitations, and the scope of future research is to derive N recommendations from DGCI technology. Results showed that more effort is needed to develop tools to benefit from DGCI. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
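For reference, the two image-derived indices mentioned are computed from reflectance and color values: NDVI from red and near-infrared reflectance, and DGCI from hue, saturation and brightness. The snippet below is a generic illustration of both formulas (the DGCI form follows the commonly used HSB-based definition), not code from the review.

```python
import colorsys

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def dgci(r: float, g: float, b: float) -> float:
    """Dark Green Color Index from RGB in [0, 1], using the common HSB-based definition:
    DGCI = [(hue - 60)/60 + (1 - saturation) + (1 - brightness)] / 3."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)     # h in [0, 1); converted to degrees below
    hue_deg = h * 360.0
    return ((hue_deg - 60.0) / 60.0 + (1.0 - s) + (1.0 - v)) / 3.0

print(f"NDVI = {ndvi(nir=0.45, red=0.08):.2f}")    # dense canopy gives a high NDVI
print(f"DGCI = {dgci(0.20, 0.45, 0.15):.2f}")      # darker green foliage gives a higher DGCI
```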
Article
Sensing Light with LEDs: Performance Evaluation for IoT Applications
by Lorenzo Incipini, Alberto Belli, Lorenzo Palma, Mauro Ballicchia and Paola Pierleoni
J. Imaging 2017, 3(4), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040050 - 12 Nov 2017
Cited by 8 | Viewed by 5516
Abstract
The Internet of Things includes all the technologies allowing the connection of everyday objects to the Internet, in order to gather measurements of physical quantities and interact with the surrounding environments through telecommunication devices with embedded sensing and actuating units. The measurements carried out with different LEDs demonstrate the possibility of using these devices both as transmitters and as optical sensors, in addition to their ability to discriminate incident wavelengths, thus making them bi-directional transceivers for Internet of Things (IoT) applications, particularly suitable in the context of Visible Light Communication (VLC). In particular, a methodological tool is provided for selecting the LED sensor for VLC applications. Full article
(This article belongs to the Special Issue Imaging in Internet of Things)

Article
Preliminary Tests and Results Concerning Integration of Sentinel-2 and Landsat-8 OLI for Crop Monitoring
by Andrea Lessio, Vanina Fissore and Enrico Borgogno-Mondino
J. Imaging 2017, 3(4), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040049 - 05 Nov 2017
Cited by 25 | Viewed by 5093
Abstract
The Sentinel-2 data by the European Space Agency were recently made available for free. Their technical features suggest synergies with the Landsat-8 dataset by NASA (National Aeronautics and Space Administration), especially in the agricultural context, where observations should be as dense as possible to give a rather complete description of the macro-phenology of crops. In this work, some preliminary results are presented concerning the geometric and spectral consistency of the two compared datasets. Tests were performed specifically focusing on the agriculture-devoted part of the Piemonte Region (NW Italy). The geometric consistencies of the Sentinel-2 and Landsat-8 datasets were tested both "absolutely" (with respect to a selected reference frame) and "relatively" (one with respect to the other) by selecting, respectively, 160 and 100 well-distributed check points. Spectral differences affecting at-the-ground reflectance were tested after image calibration performed with a dark object subtraction approach. A special focus was on differences affecting the derivable NDVI and NDWI spectral indices, these being the most widely used in the agricultural remote sensing application context. Results are encouraging and suggest that this approach can successfully enter the ordinary remote sensing-supported precision farming workflow. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
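Dark object subtraction is the simplest image calibration step mentioned above: for each band, the signal of the darkest pixels is assumed to come from atmospheric scattering and is subtracted before reflectance-based indices such as NDVI and NDWI are compared. A minimal, generic sketch follows; estimating the dark object as a low percentile is an implementation choice of this illustration, not necessarily the authors' procedure.

```python
import numpy as np

def dark_object_subtraction(band: np.ndarray, dark_percentile: float = 0.1) -> np.ndarray:
    """Subtract the dark-object signal (estimated as a low percentile) from one band."""
    dark_value = np.percentile(band, dark_percentile)
    return np.clip(band - dark_value, 0.0, None)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index: (green - NIR) / (green + NIR)."""
    return (green - nir) / (green + nir + 1e-8)

rng = np.random.default_rng(5)
green = dark_object_subtraction(rng.uniform(0.02, 0.4, (100, 100)))
nir = dark_object_subtraction(rng.uniform(0.05, 0.6, (100, 100)))
water_mask = ndwi(green, nir) > 0.0       # positive NDWI is commonly taken as water
```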

Article
Exemplar-Based Face Colorization Using Image Morphing
by Johannes Persch, Fabien Pierre and Gabriele Steidl
J. Imaging 2017, 3(4), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040048 - 31 Oct 2017
Cited by 11 | Viewed by 8297
Abstract
Colorization of gray-scale images relies on prior color information. Exemplar-based methods use a color image as the source of such information. The colors of the source image are then transferred to the gray-scale target image. In the literature, this transfer is mainly guided by texture descriptors. Face images usually contain little texture, so that the common approaches frequently fail. In this paper, we propose a new method that takes the geometric structure of the images rather than their texture into account, so that it is more reliable for faces. Our approach is based on image morphing and relies on the YUV color space. First, a correspondence mapping between the luminance Y channel of the color source image and the gray-scale target image is computed. This mapping is based on the time-discrete metamorphosis model suggested by Berkels, Effland and Rumpf. We provide a new finite difference approach for the numerical computation of the mapping. Then, the chrominance U,V channels of the source image are transferred via this correspondence map to the target image. A possible postprocessing step by a variational model is developed to further improve the results. To preserve the contrast, special attention is paid to making the postprocessing unbiased. Our numerical experiments show that our morphing-based approach clearly outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue Color Image Processing)
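The transfer step itself is simple once a correspondence map is available: the U and V channels of the source are warped through the map and attached to the target's luminance. The sketch below is a generic illustration of that final step given a pixel-wise correspondence field; computing the map via the metamorphosis model is the hard part and is not shown.

```python
import numpy as np

def transfer_chrominance(target_y: np.ndarray, source_u: np.ndarray,
                         source_v: np.ndarray, corr: np.ndarray) -> np.ndarray:
    """Attach warped source chrominance to the target luminance.
    `corr` has shape (H, W, 2) and stores, for each target pixel, the matching
    (row, col) in the source image (a pixel-wise correspondence map)."""
    rows, cols = corr[..., 0], corr[..., 1]
    u = source_u[rows, cols]
    v = source_v[rows, cols]
    return np.stack([target_y, u, v], axis=-1)    # YUV result, H x W x 3

# Toy usage with an identity correspondence map
H, W = 64, 64
identity = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)
yuv = transfer_chrominance(np.ones((H, W)) * 0.5,
                           np.zeros((H, W)), np.zeros((H, W)), identity)
```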

Article
Detection and Classification of Land Crude Oil Spills Using Color Segmentation and Texture Analysis
by O’tega Ejofodomi and Godswill Ofualagba
J. Imaging 2017, 3(4), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040047 - 19 Oct 2017
Cited by 2 | Viewed by 6940
Abstract
Crude oil spills have negative consequences for the economy, environment, health and society in which they occur, and the severity of the consequences depends on how quickly these spills are detected once they begin. Several methods have been employed for spill detection, including real-time remote surveillance by aircraft carrying surveillance teams. Other methods employ various sensors, including visible-light sensors. This paper presents an algorithm to automatically detect the presence of crude oil spills in images acquired using visible-light sensors. Images of crude oil spills used in the development of the algorithm were obtained from the Shell Petroleum Development Company (SPDC) Nigeria website. The major steps of the detection algorithm are image preprocessing, crude oil color segmentation, sky elimination segmentation, Region of Interest (ROI) extraction, ROI texture feature extraction, and ROI texture feature analysis and classification. The algorithm was developed using 25 sample images containing crude oil spills and demonstrated a sensitivity of 92% and an FPI of 1.43. The algorithm was further tested on a set of 56 case images and demonstrated a sensitivity of 82% and an FPI of 0.66. This algorithm can be incorporated into spill detection systems that utilize visible-light sensors for early detection of crude oil spills. Full article

Article
Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform
by Suja Kalarikkal Pullayikodi, Naser Tarhuni, Afaq Ahmed and Fahad Bait Shiginah
J. Imaging 2017, 3(4), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040046 - 13 Oct 2017
Cited by 11 | Viewed by 5202
Abstract
A watermark is a copy-deterrence mechanism embedded in a multimedia signal that is to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by the decoder. Watermarking can be used in various applications such as authentication, video indexing, copyright protection and access control. In this paper, a new CDMA (Code Division Multiple Access) based robust watermarking algorithm using a customized 8 × 8 Walsh Hadamard Transform is proposed for color images, and a detailed performance and robustness analysis has been performed. The paper studies in detail the effect of the spreading code length, the number of spreading codes and the type of spreading codes on the performance of the watermarking system. Compared to existing techniques, the proposed scheme is computationally more efficient and consumes much less time for execution. Furthermore, the proposed scheme is robust and survives most of the common signal processing and geometric attacks. Full article
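The transform at the core of the scheme, the Walsh-Hadamard transform, can be computed with the butterfly recursion shown below. This is a generic fast WHT on a length-8 vector, applied separably along rows and columns for an 8 × 8 block; it is not the authors' customized variant.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Fast Walsh-Hadamard transform (unnormalized); the length must be a power of 2."""
    a = x.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def fwht2(block: np.ndarray) -> np.ndarray:
    """Separable 2D transform of an 8 x 8 block: rows first, then columns."""
    rows = np.apply_along_axis(fwht, 1, block)
    return np.apply_along_axis(fwht, 0, rows)

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = fwht2(block)
assert np.allclose(fwht2(coeffs) / 64.0, block)   # the transform is its own inverse up to scaling
```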

Article
The Accuracy of 3D Optical Reconstruction and Additive Manufacturing Processes in Reproducing Detailed Subject-Specific Anatomy
by Paolo Ferraiuoli, Jonathan C. Taylor, Emily Martin, John W. Fenner and Andrew J. Narracott
J. Imaging 2017, 3(4), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040045 - 12 Oct 2017
Cited by 12 | Viewed by 8695
Abstract
3D reconstruction and 3D printing of subject-specific anatomy is a promising technology for supporting clinicians in the visualisation of disease progression and planning for surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of the quality of the reconstruction is limited by the lack of a reference geometry, as the subject-specific anatomy is unknown prior to image acquisition. In this work, an optical method based on 3D digital image correlation (3D-DIC) techniques is used to reconstruct the shape of the surface of an ex vivo porcine heart. This technique requires two digital charge-coupled device (CCD) cameras to provide full-field shape measurements and to generate a standard tessellation language (STL) file of the sample surface. The aim of this work was to quantify the error of 3D-DIC shape measurements using the additive manufacturing process. The limitations of 3D-printed object resolution and the discrepancy between the reconstructed surface of the cardiac soft tissue and a 3D-printed model of the same surface were evaluated. The results obtained demonstrate the ability of the 3D-DIC technique to reconstruct localised and detailed features on the cardiac surface with sub-millimeter accuracy. Full article
(This article belongs to the Special Issue Three-Dimensional Printing and Imaging)

Article
Baseline Fusion for Image and Pattern Recognition—What Not to Do (and How to Do Better)
by Ognjen Arandjelović
J. Imaging 2017, 3(4), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040044 - 11 Oct 2017
Cited by 3 | Viewed by 4161
Abstract
The ever-increasing demand for reliable inference capable of handling the unpredictable challenges of practical application in the real world has made research on information fusion of major importance; indeed, this challenge is pervasive in a whole range of image understanding tasks. In the development of the most common type—score-level fusion algorithms—it is virtually universally desirable to have as a reference starting point a simple and universally sound baseline benchmark to which newly developed approaches can be compared. One of the most pervasively used methods is that of weighted linear fusion. It has cemented itself as the default off-the-shelf baseline owing to its simplicity of implementation, its interpretability, and its surprisingly competitive performance across the widest range of application domains and information source types. In this paper I argue that, despite this track record, weighted linear fusion is not a good baseline on the grounds that there is an equally simple and interpretable alternative—namely quadratic mean-based fusion—which is theoretically more principled and more successful in practice. I argue the former from first principles and demonstrate the latter using a series of experiments on a diverse set of fusion problems: classification using synthetically generated data, computer vision-based object recognition, arrhythmia detection, and fatality prediction in motor vehicle accidents. On all of the aforementioned problems and in all instances, the proposed fusion approach exhibits superior performance over linear fusion, often increasing class separation by several orders of magnitude. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
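The comparison between the two baselines is easy to state concretely: weighted linear fusion averages the per-source scores, while quadratic mean-based fusion averages their squares and takes the square root, which rewards sources that are individually confident. A generic sketch of both rules, with equal weights assumed purely for illustration:

```python
import numpy as np

def linear_fusion(scores, weights=None):
    """Weighted arithmetic mean of per-source scores; `scores` has shape (n_samples, n_sources)."""
    w = np.full(scores.shape[1], 1.0 / scores.shape[1]) if weights is None else weights
    return scores @ w

def quadratic_mean_fusion(scores, weights=None):
    """Weighted quadratic mean (root mean square) of per-source scores."""
    w = np.full(scores.shape[1], 1.0 / scores.shape[1]) if weights is None else weights
    return np.sqrt((scores ** 2) @ w)

# Two sources scoring three samples in [0, 1]
s = np.array([[0.9, 0.1],     # one confident source, one weak
              [0.5, 0.5],
              [0.2, 0.2]])
print(linear_fusion(s))            # [0.5  0.5  0.2]
print(quadratic_mean_fusion(s))    # [0.64 0.5  0.2] -> the confident source counts for more
```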

Article
Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance
by Minh-Tan Pham, Grégoire Mercier and Lionel Bombrun
J. Imaging 2017, 3(4), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040043 - 10 Oct 2017
Cited by 22 | Viewed by 6001
Abstract
A novel efficient method for content-based image retrieval (CBIR) is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints) within the image. Then, the dissimilarity measure between images is calculated based on the geometric distance between the topological feature spaces (i.e., manifolds) formed by the sets of local descriptors generated from each image of the database. In this work, we propose to extract and use the local extrema pixels as our feature points. Then, the so-called local extrema-based descriptor (LED) is generated for each keypoint by integrating all color, spatial as well as gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud, and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed on several color texture databases including Vistex, STex, color Brodatz, USPtex and Outex TC-00013 using the proposed approach provide very efficient and competitive results compared to the state-of-the-art methods. Full article
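Comparing two descriptor sets as "manifolds" is often reduced in practice to comparing their covariance matrices with an affine-invariant Riemannian metric. The snippet below illustrates that common choice (distance equal to the Frobenius norm of the log of the whitened ratio, computed via generalized eigenvalues); it is an assumption for illustration and not necessarily the exact distance used in the paper.

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt(sum(log(lambda_i)^2)) over the generalized eigenvalues of (B, A)."""
    w = eigvalsh(B, A)                    # solves B v = lambda A v
    return float(np.sqrt(np.sum(np.log(w) ** 2)))

def descriptor_cloud_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Distance between two descriptor sets (n_points x dim), each summarized by its covariance."""
    eps = 1e-6 * np.eye(X.shape[1])       # regularization keeps the matrices positive definite
    return riemannian_distance(np.cov(X.T) + eps, np.cov(Y.T) + eps)

rng = np.random.default_rng(6)
query, candidate = rng.normal(size=(300, 5)), rng.normal(size=(350, 5)) * 1.5
print(f"distance = {descriptor_cloud_distance(query, candidate):.3f}")
```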

Article
Monitoring of the Nirano Mud Volcanoes Regional Natural Reserve (North Italy) using Unmanned Aerial Vehicles and Terrestrial Laser Scanning
by Tommaso Santagata
J. Imaging 2017, 3(4), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040042 - 30 Sep 2017
Cited by 7 | Viewed by 4627
Abstract
In recent years, measurement instruments and techniques for three-dimensional mapping, such as Terrestrial Laser Scanning (TLS) and photogrammetry from Unmanned Aerial Vehicles (UAVs), have been increasingly used to monitor topographic changes of particular geological features such as volcanic areas. In addition, topographic instruments such as a Total Station Theodolite (TST) and GPS receivers can be used to obtain precise elevation and coordinate position data by measuring fixed points both inside and outside the area affected by volcanic activity. In this study, the integration of these instruments has helped to obtain several types of data to monitor the variations in the heights of extrusive edifices within the mud volcano field of the Nirano Regional Natural Reserve (Northern Italy), as well as to study the mechanism of micro-fracturing and the evolution of mud flows and volcanic cones with very high accuracy through 3D point cloud surface analysis and digitization. The large amount of data collected was also analysed to derive morphological information about mud-cracks and surface roughness. This contribution focuses on the methods and analyses performed using measurement instruments such as TLS and UAVs to study and monitor the main volcanic complexes of the Nirano Natural Reserve as part of a research project, which also involves other studies addressing gas and acoustic measurements and mineralogical and paleontological analyses, organized by the University of Modena and Reggio Emilia in collaboration with the Municipality of Fiorano Modenese. Full article
(This article belongs to the Special Issue 3D Imaging)

Article
Towards a Novel Approach for Tumor Volume Quantification
by Amina Kharbach, Benaissa Bellach, Mohammed Rahmoune, Mohammed Rahmoun and Hanane Hadj Kacem
J. Imaging 2017, 3(4), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040041 - 27 Sep 2017
Cited by 1 | Viewed by 3627
Abstract
In medical image processing, evaluating the variations of lesion volume plays a major role in many medical applications. It helps radiologists to follow up with patients and examine the effects of therapy. Several approaches have been proposed to meet these medical expectations. The present work comes within this context. We present a new approach based on the local dissimilarity volume (LDV), which is a 3D representation of the local dissimilarity map (LDM). This map provides a useful means of comparing two images, offering a localization of information. We demonstrate the effectiveness of this method (LDV) compared to the medical techniques used by radiologists. The simulation results show that we can quantify lesion volume using the LDV method, which is an efficient way to calculate and localize the volume variation of anomalies. It allowed time savings, to the complete satisfaction of an expert, during medical treatment. Full article
(This article belongs to the Special Issue Nanoparticles and Medical Imaging for Image Guided Medicine)

Review
The Academy Color Encoding System (ACES): A Professional Color-Management Framework for Production, Post-Production and Archival of Still and Motion Pictures
by Walter Arrighetti
J. Imaging 2017, 3(4), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040040 - 21 Sep 2017
Cited by 8 | Viewed by 17927
Abstract
The Academy of Motion Picture Arts and Sciences has been pivotal in the inception, design and later adoption of a vendor-agnostic and open framework for color management, the Academy Color Encoding System (ACES), targeting theatrical, TV and animation features, but also still photography and image preservation at large. For this reason, the Academy gathered an interdisciplinary group of scientists, technologists, and creatives to contribute to it, so that it is scientifically sound and technically advantageous in solving practical and interoperability problems in the current film production, postproduction and visual-effects (VFX) ecosystem—all while preserving and future-proofing the cinematographers’ and artists’ creative intent as its main objective. In this paper, a review of ACES’ technical specifications is provided, together with the current status of the project, and a recent use case is presented, namely that of the first Italian production embracing an end-to-end ACES pipeline. In addition, new ACES components are introduced and a discussion is started about possible uses for the long-term preservation of color imaging in video-content heritage. Full article
(This article belongs to the Special Issue Color Image Processing)