J. Imaging, Volume 6, Issue 7 (July 2020) – 17 articles

Cover Story: Estimating the number of people in a scene is crucial for many security problems, including crowd management and distribution, for example at a concert or in a subway car, where it is desirable to keep some areas from becoming overcrowded while leaving others free, and statistical analysis, for example of protests, where it is important to estimate the number of people involved. Density map estimation is another important task: the spatial information it provides shows how a crowd is distributed within the scene, which is useful for monitoring crowds and identifying the most congested areas.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF, click its "PDF Full-text" link and open the file with the free Adobe Reader.
19 pages, 14069 KiB  
Article
Edge-Based Color Image Segmentation Using Particle Motion in a Vector Image Field Derived from Local Color Distance Images
by Wutthichai Phornphatcharaphong and Nawapak Eua-Anant
J. Imaging 2020, 6(7), 72; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070072 - 16 Jul 2020
Cited by 9 | Viewed by 3067
Abstract
This paper presents an edge-based color image segmentation approach, derived from the method of particle motion in a vector image field, which previously could be applied only to monochrome images. Rather than using an edge vector field derived from a gradient vector field and a normal compressive vector field derived from a Laplacian-gradient vector field, two novel orthogonal vector fields are computed directly from a color image, one parallel and one orthogonal to the edges. These are then used in the model to force a particle to move along the object edges. The normal compressive vector field is created from the collection of center-to-centroid vectors of local color distance images. The edge vector field is then derived from the normal compressive vector field so as to obtain a vector field analogous to a Hamiltonian gradient vector field. Using the PASCAL Visual Object Classes Challenge 2012 (VOC2012) and the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), the benchmark scores of the proposed method are compared to those of the traditional particle motion in a vector image field (PMVIF), watershed, simple linear iterative clustering (SLIC), K-means, mean shift, and J-value segmentation (JSEG). The proposed method yields better Rand index (RI), global consistency error (GCE), normalized variation of information (NVI), boundary displacement error (BDE), and Dice coefficients, as well as faster computation and better noise resistance.
(This article belongs to the Special Issue Color Image Segmentation)
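The center-to-centroid construction at the heart of the method can be illustrated compactly. Below is a minimal sketch (our illustration, not the authors' code; NumPy and a float RGB input assumed) that builds a normal compressive vector field by pointing each pixel's vector from its window center toward the centroid of the local color distance image; the edge field that drives the particle can then be taken as the 90° rotation (vx, vy) → (−vy, vx) of this field.

```python
import numpy as np

def normal_compressive_field(img, radius=2):
    """img: float RGB array of shape (H, W, 3). Returns an (H, W, 2) field."""
    H, W, _ = img.shape
    field = np.zeros((H, W, 2))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            # local color distance image: Euclidean distance to the center color
            d = np.linalg.norm(win - img[y, x], axis=2)
            m = d.sum()
            if m > 0:
                # center-to-centroid vector of the local distance image
                field[y, x] = (np.sum(d * xs) / m, np.sum(d * ys) / m)
    return field
```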

20 pages, 13246 KiB  
Article
Cross-Depicted Historical Motif Categorization and Retrieval with Deep Learning
by Vinaychandran Pondenkandath, Michele Alberti, Nicole Eichenberger, Rolf Ingold and Marcus Liwicki
J. Imaging 2020, 6(7), 71; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070071 - 15 Jul 2020
Cited by 2 | Viewed by 2972
Abstract
In this paper, we tackle the problem of categorizing and identifying cross-depicted historical motifs using recent deep learning techniques, with the aim of developing a content-based image retrieval system. By cross-depiction we mean the problem that the same object can be represented (depicted) in various ways. The objects of interest in this research are watermarks, which are crucial for dating manuscripts. For watermarks, cross-depiction arises for two reasons: (i) there are many similar representations of the same motif, and (ii) there are several ways of capturing the watermarks; i.e., because watermarks are not visible on a scan or photograph, they are typically retrieved via hand tracing, rubbing, or special photographic techniques. This leads to different representations of the same (or similar) objects, making it hard for pattern recognition methods to recognize them. While this is a simple problem for human experts, computer vision techniques have difficulty generalizing across the various depiction possibilities. In this paper, we present a study in which we use deep neural networks to categorize watermarks with varying levels of detail. The macro-averaged F1-score on an imbalanced 12-category classification task is 88.3%, and the multi-labelling performance (Jaccard index) on a 622-label task is 79.5%. To analyze the usefulness of an image-based system for assisting humanities scholars in cataloguing manuscripts, we also measure the performance of similarity matching on expert-crafted test sets of varying sizes (50 and 1000 watermark samples). A significant outcome is that all relevant results belonging to the same super-class are found by our system (mean average precision of 100%), despite the cross-depicted nature of the motifs. This result had not previously been achieved in the literature.
(This article belongs to the Special Issue Recent Advances in Historical Document Processing)
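Retrieval in such a system typically amounts to ranking gallery images by the similarity of their deep features to the query's. A minimal sketch of that step (our generic illustration, not the paper's code; NumPy assumed):

```python
import numpy as np

def retrieve(query_feat, gallery_feats, k=10):
    """Rank gallery watermarks by cosine similarity of deep features.
    query_feat: (d,) embedding; gallery_feats: (n, d) embeddings."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]   # indices of the top-k matches
```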

27 pages, 7335 KiB  
Article
A New Composite Fractal Function and Its Application in Image Encryption
by Shafali Agarwal
J. Imaging 2020, 6(7), 70; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070070 - 15 Jul 2020
Cited by 12 | Viewed by 3718
Abstract
The spatially nonuniform structure and chaotic nature of fractals make them well suited to cryptographic applications. This paper proposes a new composite fractal function (CFF) that combines two different Mandelbrot set (MS) functions with one control parameter. Simulation results demonstrate that the resulting map has high sensitivity to initial values, a complex structure, a wider chaotic region, and more complicated dynamical behavior. Exploiting the chaotic properties of the fractal, an image encryption algorithm using fractal-based pixel permutation and substitution is proposed. The process starts by scrambling the plain-image pixel positions using the Hénon map, so that an intruder fails to obtain the original image even after deducing the standard confusion-diffusion process. The permutation phase uses a Z-scanned random fractal matrix to shuffle the scrambled image pixels. Further, two different fractal sequences of complex numbers are generated using the same function, i.e., the CFF. The complex sequences are then converted to a double-datatype matrix and used to diffuse the scrambled pixels row-wise and column-wise, separately. Security and performance analysis results confirm the reliability, high security level, and robustness of the proposed algorithm against various attacks, including brute-force, known/chosen-plaintext, differential, and occlusion attacks.
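The Hénon-map scrambling step can be sketched generically: iterate the map, then use the ranking of the chaotic orbit as a pixel permutation. This is a common construction and only an illustration of the idea (NumPy assumed), not the paper's exact scheme; a real cipher would derive the initial values from the secret key and discard a transient.

```python
import numpy as np

def henon_permutation(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Pixel permutation of length n from the Henon map
    x' = 1 - a*x^2 + y, y' = b*x (classic parameters a=1.4, b=0.3)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return np.argsort(xs)   # ranking the chaotic orbit yields a permutation

def scramble(img):
    flat = img.reshape(-1)
    p = henon_permutation(flat.size)
    return flat[p].reshape(img.shape), p

def unscramble(scrambled, p):
    flat = np.empty_like(scrambled.reshape(-1))
    flat[p] = scrambled.reshape(-1)   # invert the permutation
    return flat.reshape(scrambled.shape)
```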

21 pages, 13701 KiB  
Article
Polyp Segmentation with Fully Convolutional Deep Neural Networks—Extended Evaluation Study
by Yunbo Guo, Jorge Bernal and Bogdan J. Matuszewski
J. Imaging 2020, 6(7), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070069 - 13 Jul 2020
Cited by 41 | Viewed by 8632
Abstract
Analysis of colonoscopy images plays a significant role in the early detection of colorectal cancer. Automated tissue segmentation can be useful for the two most relevant clinical target applications, lesion detection and classification, thereby providing an important means to make both processes more accurate and robust. To automate video colonoscopy analysis, computer vision and machine learning methods have been utilized and shown to enhance polyp detectability and segmentation objectivity. This paper describes a polyp segmentation algorithm based on fully convolutional network models, originally developed for the Endoscopic Vision Gastrointestinal Image Analysis (GIANA) polyp segmentation challenges. The key contribution of the paper is an extended evaluation of the proposed architecture, comparing it against established image segmentation benchmarks using several metrics with cross-validation on the GIANA training dataset. Different experiments are described, including examination of various network configurations, values of design parameters, data augmentation approaches, and polyp characteristics. The reported results demonstrate the significance of data augmentation and of careful selection of the method's design parameters. The proposed method delivers state-of-the-art results with near real-time performance. The described solution was instrumental in securing the top spot in the polyp segmentation sub-challenge at the 2017 GIANA challenge and second place in the standard-resolution segmentation task at the 2018 GIANA challenge.
(This article belongs to the Special Issue MIUA2019)
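Segmentation evaluations of this kind typically rest on overlap metrics such as the Dice coefficient and the Jaccard index. A minimal sketch of both (our illustration; NumPy assumed):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks (0/1 or bool arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def jaccard(pred, gt, eps=1e-7):
    """Intersection over union; related to Dice by J = D / (2 - D)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)
```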

14 pages, 2616 KiB  
Article
A Discriminative Long Short Term Memory Network with Metric Learning Applied to Multispectral Time Series Classification
by Merve Bozo, Erchan Aptoula and Zehra Çataltepe
J. Imaging 2020, 6(7), 68; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070068 - 12 Jul 2020
Cited by 8 | Viewed by 3176
Abstract
In this article, we propose an end-to-end deep network for the classification of multi-spectral time series and apply it to crop type mapping. Long short-term memory networks (LSTMs) are well established in this regard, thanks to their capacity to capture both long- and short-term temporal dependencies. Nevertheless, dealing with high intra-class variance and inter-class similarity remains a significant challenge. To address these issues, we propose a straightforward approach in which LSTMs are combined with metric learning. The proposed architecture accommodates three distinct branches with shared weights, each containing an LSTM module, that are merged through a triplet loss. It thus not only minimizes classification error but also forces the sub-networks to produce more discriminative deep features. It is validated on Breizhcrops, a recently introduced and challenging time series dataset for crop type mapping.
(This article belongs to the Special Issue Color Image Segmentation)
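The three-branch, shared-weight design can be sketched in a few lines of PyTorch. This is a hedged illustration of the general pattern, not the paper's configuration: the band count (13), sequence length (45), batch size, and class count (9) below are placeholders.

```python
import torch
import torch.nn as nn

class TripletLSTM(nn.Module):
    """One LSTM encoder shared across anchor/positive/negative branches."""
    def __init__(self, n_bands=13, hidden=128, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_classes)

    def embed(self, x):                 # x: (batch, time, bands)
        _, (h, _) = self.lstm(x)
        return h[-1]                    # last hidden state = deep feature

model = TripletLSTM()
triplet = nn.TripletMarginLoss(margin=1.0)
ce = nn.CrossEntropyLoss()

anchor, pos, neg = (torch.randn(4, 45, 13) for _ in range(3))
labels = torch.randint(0, 9, (4,))
za, zp, zn = model.embed(anchor), model.embed(pos), model.embed(neg)
# joint objective: classification error plus the metric-learning term
loss = ce(model.cls(za), labels) + triplet(za, zp, zn)
loss.backward()
```

Because the weights are shared, a single encoder serves all three branches; the triplet term pulls same-class embeddings together and pushes different-class embeddings apart, which is what makes the deep features more discriminative.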

19 pages, 8385 KiB  
Article
Identification of QR Code Perspective Distortion Based on Edge Directions and Edge Projections Analysis
by Ladislav Karrach, Elena Pivarčiová and Pavol Božek
J. Imaging 2020, 6(7), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070067 - 10 Jul 2020
Cited by 25 | Viewed by 11931
Abstract
QR (quick response) codes are among the most popular types of two-dimensional (2D) matrix codes and are currently used in a wide variety of fields. Compared to 1D bar codes, 2D matrix codes can encode significantly more data in the same area. We compared algorithms capable of localizing multiple QR codes in an image using the typical finder patterns present in three corners of a QR code. Finally, we present a novel approach to identifying perspective distortion by analyzing the directions of horizontal and vertical edges and by maximizing the standard deviation of the horizontal and vertical projections of these edges. The algorithm is computationally efficient, works well for low-resolution images, and is suited to real-time processing.
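The projection criterion is intuitive: when the QR module grid is axis-aligned, edge pixels pile up in a few rows and columns, so the projection histograms become spiky and their standard deviations are maximal. A minimal sketch (our illustration, not the paper's code; NumPy/SciPy assumed) that searches a one-parameter family of distortions, here pure rotation:

```python
import numpy as np
from scipy import ndimage

def alignment_score(edges):
    """Sum of the std. deviations of the column and row edge projections."""
    return np.std(edges.sum(axis=0)) + np.std(edges.sum(axis=1))

def best_rotation(edges, angles=np.arange(-15.0, 15.5, 0.5)):
    """Pick the candidate angle that maximizes the projection spikiness."""
    scores = [alignment_score(ndimage.rotate(edges, a, reshape=False, order=0))
              for a in angles]
    return angles[int(np.argmax(scores))]
```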

16 pages, 7783 KiB  
Article
Investigating Optimal Time Step Intervals of Imaging for Data Quality through a Novel Fully-Automated Cell Tracking Approach
by Feng Wei Yang, Lea Tomášová, Zeno v. Guttenberg, Ke Chen and Anotida Madzvamuse
J. Imaging 2020, 6(7), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070066 - 07 Jul 2020
Cited by 2 | Viewed by 2982
Abstract
Computer-based fully-automated cell tracking is becoming increasingly important in cell biology, since it provides unrivalled capacity and efficiency for the analysis of large datasets. However, automatic cell tracking still lags behind human manual tracking in pattern recognition and error handling, which has motivated decades of research and enormous efforts in developing advanced cell tracking packages and software algorithms. Typical research in this field focuses on dealing with existing data and finding the best solution. Here, we investigate a novel approach in which the quality of data acquisition helps improve the accuracy of cell tracking algorithms, and vice versa. Generally speaking, when tracking cell movement, the more frequently images are taken, the more accurately cells are tracked; yet issues such as damage to cells due to light intensity, overheating of equipment, and the sheer size of the data prevent constant data streaming. Hence, the trade-off between the frequency at which image data are collected and the accuracy of the cell tracking algorithms needs to be studied. In this paper, we look at the effects of different choices of time step interval (i.e., the frequency of data acquisition) on our existing cell tracking algorithms. We generate several experimental datasets in which the true outcome (i.e., the direction of cell migration) is known, either by using an effective chemoattractant or by employing no chemoattractant. We specify a relatively short time step interval (30 s) between pictures at the data generation stage, so that we can later select a portion of the images to produce datasets with longer time step intervals, such as 1 min, 2 min, and so on. We evaluate the accuracy of our cell tracking algorithms to illustrate the effects of these different time step intervals, and establish that certain relationships exist between tracking accuracy and the time step interval of experimental microscope data acquisition. We perform fully-automatic adaptive cell tracking on multiple datasets to identify optimal time step intervals for data acquisition, while at the same time demonstrating the performance of the computer cell tracking algorithms.
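The subsampling trick described in the abstract, i.e., acquiring at 30 s and deriving coarser series afterwards, reduces to simple slicing. A minimal sketch (our illustration):

```python
def subsample(frames, base_interval_s=30, target_interval_s=120):
    """Emulate a coarser acquisition interval by keeping every k-th frame
    of a series captured at base_interval_s (30 s in the paper)."""
    assert target_interval_s % base_interval_s == 0, "must be a multiple"
    k = target_interval_s // base_interval_s
    return frames[::k]

# e.g. frames_2min = subsample(frames_30s, 30, 120)
```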

19 pages, 5617 KiB  
Article
Fully Automated 3D Cardiac MRI Localisation and Segmentation Using Deep Neural Networks
by Sulaiman Vesal, Andreas Maier and Nishant Ravikumar
J. Imaging 2020, 6(7), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070065 - 06 Jul 2020
Cited by 18 | Viewed by 5388
Abstract
Cardiac magnetic resonance (CMR) imaging is used widely for morphological assessment and diagnosis of various cardiovascular diseases. Deep learning approaches based on 3D fully convolutional networks (FCNs) have improved state-of-the-art segmentation performance in CMR images. However, previous methods have employed several pre-processing steps and have focused primarily on segmenting low-resolution images. A crucial step in any automatic segmentation approach is to first localize the cardiac structure of interest within the MRI volume, to reduce false positives and computational complexity. In this paper, we propose two strategies for localizing and segmenting the heart ventricles and myocardium, termed multi-stage and end-to-end, using a 3D convolutional neural network. Our method consists of an encoder–decoder network that is first trained to predict a coarse localized density map of the target structure at a low resolution. Subsequently, a second similar network employs this coarse density map to crop the image at a higher resolution and, consequently, segment the target structure. For the latter, the same two-stage architecture is trained end-to-end. The 3D U-Net with some architectural changes (referred to as 3D DR-UNet) was used as the base architecture in this framework for both the multi-stage and end-to-end strategies. Moreover, we investigate whether the incorporation of coarse features improves the segmentation. We evaluate the two proposed segmentation strategies on two cardiac MRI datasets, namely the Automatic Cardiac Segmentation Challenge (ACDC) STACOM 2017 and the Left Atrium Segmentation Challenge (LASC) STACOM 2018. Extensive experiments and comparisons with other state-of-the-art methods indicate that the proposed multi-stage framework consistently outperforms the rest in terms of several segmentation metrics. The experimental results highlight the robustness of the proposed approach and its ability to generate accurate high-resolution segmentations, despite the presence of varying degrees of pathology-induced changes to cardiac morphology and image appearance, low contrast, and noise in the CMR volumes.
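The localize-then-crop step between the two networks can be sketched generically: threshold the (upsampled) coarse density map, take the bounding box of the active region plus a margin, and crop the full-resolution volume there. This is our hedged illustration of the idea, not the authors' implementation; the 0.5 threshold and margin are assumptions.

```python
import numpy as np

def crop_from_density(volume, density, margin=8):
    """volume: full-resolution 3D array; density: coarse localization map
    upsampled to volume.shape, values >= 0. Returns the crop and its slices."""
    mask = density > 0.5 * density.max()
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices
```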

16 pages, 4857 KiB  
Article
Optimization of Breast Tomosynthesis Visualization through 3D Volume Rendering
by Ana M. Mota, Matthew J. Clarkson, Pedro Almeida and Nuno Matela
J. Imaging 2020, 6(7), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070064 - 04 Jul 2020
Viewed by 3042
Abstract
3D volume rendering may represent a complementary option for visualizing Digital Breast Tomosynthesis (DBT) examinations by providing an understanding of the underlying data at a glance. Rendering parameters directly influence the quality of rendered images. The purpose of this work is to study the influence of two of these parameters (voxel dimension in the z direction and sampling distance) on DBT rendered data. Both parameters were studied with a real phantom and one clinical DBT dataset. The voxel size was changed from 0.085 × 0.085 × 1.0 mm³ to 0.085 × 0.085 × 0.085 mm³ using ten interpolation functions available in the Visualization Toolkit (VTK) library, and several sampling distance values were evaluated. The results were investigated at 90° using volume rendering visualization with the composite technique. For quantitative analysis of the phantom, the degree of smoothness, contrast-to-noise ratio, and full width at half maximum of a Gaussian curve fitted to the profile of one disk were used. Additionally, the time required for each visualization was recorded. The Hamming interpolation function presented the best compromise in image quality, and the sampling distance values that showed the best balance between time and image quality were 0.025 mm and 0.05 mm. With appropriate rendering parameters, a significant improvement in rendered images was achieved.
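Two of the quantitative measures named above are standard and easy to state precisely. A minimal sketch (ours, not the paper's code; NumPy/SciPy assumed) of the contrast-to-noise ratio and of FWHM obtained from a Gaussian fit to a disk profile, using FWHM = 2√(2 ln 2)·σ:

```python
import numpy as np
from scipy.optimize import curve_fit

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between two regions of a rendered image."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + c

def fwhm(profile):
    """Full width at half maximum of a Gaussian fitted to a 1-D profile."""
    x = np.arange(profile.size, dtype=float)
    p0 = [np.ptp(profile), float(np.argmax(profile)),
          profile.size / 10.0, float(profile.min())]
    (a, mu, sigma, c), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
```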

14 pages, 4194 KiB  
Article
Evaluation of the Weighted Mean X-ray Energy for an Imaging System Via Propagation-Based Phase-Contrast Imaging
by Maria Seifert, Mareike Weule, Silvia Cipiccia, Silja Flenner, Johannes Hagemann, Veronika Ludwig, Thilo Michel, Paul Neumayer, Max Schuster, Andreas Wolf, Gisela Anton, Stefan Funk and Bernhard Akstaller
J. Imaging 2020, 6(7), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070063 - 03 Jul 2020
Cited by 5 | Viewed by 4230
Abstract
For imaging events of extremely short duration, such as shock waves or explosions, it is necessary to image the object with a single-shot exposure. A suitable setup is a laser-induced X-ray source such as the one at GSI (Helmholtzzentrum für Schwerionenforschung GmbH, Society for Heavy Ion Research) in Darmstadt, Germany. There, a pulse from the high-energy laser Petawatt High Energy Laser for Heavy Ion eXperiments (PHELIX) can be directed onto a tungsten wire to generate a picosecond polychromatic X-ray pulse, called a backlighter. For grating-based single-shot phase-contrast imaging of shock waves or exploding wires, it is important to know the weighted mean energy of the X-ray spectrum in order to choose a suitable setup. In propagation-based phase-contrast imaging, knowledge of the weighted mean energy is necessary to reconstruct quantitative phase images of unknown objects. Hence, we developed a method to evaluate the weighted mean energy of the X-ray backlighter spectrum using propagation-based phase-contrast images. In a first step, wave-field simulations were performed to verify the results. Furthermore, our evaluation was cross-checked against monochromatic synchrotron measurements with known energy at the Diamond Light Source (DLS, Didcot, UK) as a proof of concept.
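For a sampled spectrum S(E), the weighted mean energy is the intensity-weighted first moment, Ē = ∫E·S(E) dE / ∫S(E) dE. A minimal numerical sketch (our illustration; NumPy assumed):

```python
import numpy as np

def weighted_mean_energy(energies_keV, spectrum):
    """E_mean = integral(E * S(E) dE) / integral(S(E) dE) on a sampled grid."""
    e = np.asarray(energies_keV, dtype=float)
    s = np.asarray(spectrum, dtype=float)
    return np.trapz(e * s, e) / np.trapz(s, e)
```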

15 pages, 4061 KiB  
Article
MH-MetroNet—A Multi-Head CNN for Passenger-Crowd Attendance Estimation
by Pier Luigi Mazzeo, Riccardo Contino, Paolo Spagnolo, Cosimo Distante, Ettore Stella, Massimiliano Nitti and Vito Renò
J. Imaging 2020, 6(7), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070062 - 02 Jul 2020
Cited by 9 | Viewed by 4575
Abstract
An accurate estimate of passenger attendance in each metro car contributes to the safe coordination and sorting of crowds at each metro station. In this work, we propose a multi-head Convolutional Neural Network (CNN) architecture trained to estimate passenger attendance in a metro car. The proposed network architecture consists of two main parts: a convolutional backbone, which extracts features over the whole input image, and multi-head layers able to estimate a density map, needed to predict the number of people within the crowd image. The network performance is first evaluated on publicly available crowd counting datasets, including ShanghaiTech part_A, ShanghaiTech part_B, and UCF_CC_50, and then trained and tested on our dataset, acquired in subway cars in Italy. In both cases, a comparison is made against the most relevant and recent state-of-the-art crowd counting architectures, showing that our proposed MH-MetroNet architecture outperforms them in terms of Mean Absolute Error (MAE), Mean Squared Error (MSE), and passenger count prediction.
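In density-map-based crowd counting, the predicted count is simply the integral of the density map, and MAE/MSE are computed over per-image counts. A minimal sketch (our illustration; NumPy assumed):

```python
import numpy as np

def count_from_density(density_map):
    """The predicted head count is the integral (sum) of the density map."""
    return float(density_map.sum())

def mae_mse(pred_counts, true_counts):
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    mae = np.mean(np.abs(pred - true))
    # note: crowd-counting papers conventionally report the *root* mean
    # squared error under the name "MSE"
    mse = np.sqrt(np.mean((pred - true) ** 2))
    return mae, mse
```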

19 pages, 3697 KiB  
Article
Tracking of Deformable Objects Using Dynamically and Robustly Updating Pictorial Structures
by Connor Charles Ratcliffe and Ognjen Arandjelović
J. Imaging 2020, 6(7), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070061 - 02 Jul 2020
Viewed by 2307
Abstract
The problem posed by complex, articulated, or deformable objects has been the focus of much tracking research for a considerable length of time. However, it remains a major challenge, fraught with numerous difficulties. The increased ubiquity of technology in all realms of our society has made the need for effective solutions all the more urgent. In this article, we describe a novel method that systematically addresses the aforementioned difficulties and in practice outperforms the state of the art. Global spatial flexibility and robustness to deformations are achieved by adopting a pictorial-structure-based geometric model, and localized appearance changes by a subspace-based model of part appearance underlain by a gradient-based representation. In addition to one-off learning of both the geometric constraints and part appearances, we introduce a continuing learning framework that implements information discounting, i.e., the discarding of historical appearances in favour of more recent ones. Moreover, as a means of ensuring robustness to transient occlusions (including self-occlusions), we propose a solution for detecting unlikely appearance changes, which allows unreliable data to be rejected. A comprehensive evaluation of the proposed method, an analysis and discussion of the findings, and a comparison with several state-of-the-art methods demonstrate the superiority of our algorithm.
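The information-discounting idea can be illustrated in its simplest form, exponential forgetting of a template, together with a plausibility gate for rejecting unlikely appearance changes. The paper uses a subspace model of part appearance; this sketch shows only a mean-template variant of the same discounting principle, and alpha and threshold are illustrative assumptions.

```python
import numpy as np

def discounted_update(appearance_model, new_obs, alpha=0.1):
    """Exponential forgetting: historical appearances are discounted in
    favour of recent ones. Both arguments are feature vectors (e.g.,
    gradient-based part descriptors)."""
    return (1.0 - alpha) * appearance_model + alpha * new_obs

def is_plausible(appearance_model, new_obs, threshold):
    """Gate against transient occlusions: update the model only when the
    new observation is close enough to the current appearance."""
    return np.linalg.norm(new_obs - appearance_model) < threshold
```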

21 pages, 3384 KiB  
Article
Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library
by Rabia Zafar, Muhammad Shahid Farid and Muhammad Hassan Khan
J. Imaging 2020, 6(7), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070060 - 02 Jul 2020
Cited by 13 | Viewed by 5129
Abstract
Image fusion is a process that integrates similar types of images, collected from heterogeneous sources, into one image in which the information is more definite and certain. The resultant image is therefore expected to be more informative for both human and machine perception. Different image combination methods have been proposed to consolidate significant data from a collection of images into one image. Given its applications and advantages in a variety of fields, such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and to study them comparatively. This paper presents a review of the current state-of-the-art and well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also produce a multi-focus image fusion dataset by collecting the test images widely used in different studies. The quantitative evaluation of fusion results is performed using a set of image fusion quality assessment metrics, and the performance is also evaluated using different statistical measures. Another contribution of this paper is a multi-focus image fusion library; to the best of our knowledge, no such library exists so far. The library provides implementations of numerous state-of-the-art image fusion algorithms and is made publicly available on the project website.
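To make the multi-focus setting concrete, here is a minimal sketch of one classical fusion baseline (our illustration, not taken from the reviewed library): per pixel, keep the source image that is locally more in focus, using the local variance of the Laplacian as the focus measure. Grayscale float inputs of equal shape are assumed.

```python
import numpy as np
from scipy import ndimage

def focus_measure(gray, size=9):
    """Local variance of the Laplacian as a per-pixel focus measure."""
    lap = ndimage.laplace(gray.astype(float))
    mean = ndimage.uniform_filter(lap, size)
    return ndimage.uniform_filter(lap * lap, size) - mean * mean

def fuse(img_a, img_b):
    """Pick, per pixel, the source image that is locally more in focus."""
    mask = focus_measure(img_a) >= focus_measure(img_b)
    return np.where(mask, img_a, img_b)
```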

18 pages, 6956 KiB  
Article
μXRF Mapping as a Powerful Technique for Investigating Metal Objects from the Archaeological Site of Ferento (Central Italy)
by Giuseppe Capobianco, Adriana Sferragatta, Luca Lanteri, Giorgia Agresti, Giuseppe Bonifazi, Silvia Serranti and Claudia Pelosi
J. Imaging 2020, 6(7), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070059 - 30 Jun 2020
Cited by 6 | Viewed by 3196
Abstract
This research concerns the application of micro X-ray fluorescence (µXRF) mapping to the investigation of a group of selected metal objects from the archaeological site of Ferento, a Roman and later medieval town in Central Italy. Specifically, attention was focused on two test pits, named IV and V, in which metal objects were found, mainly pertaining to the medieval period and never before investigated from a compositional point of view. The potential of µXRF mapping was tested using a Bruker Tornado M4 equipped with an Rh tube, operating at 50 kV and 500 μA, with a 25 μm spot obtained with polycapillary optics. Principal component analysis (PCA) and multivariate curve resolution (MCR) were used to process the X-ray fluorescence spectra. The results showed that the investigated items differ in composition in terms of chemical elements. Three little wheels are made of lead, while the fibulae are made of copper-based alloys with varying amounts of tin, zinc, and lead. Only one ring is iron-based; the other objects, namely a spatula and an appliqué, are also made of copper-based alloys, but with different relative amounts of the main elements. In two objects, traces of gold were found, suggesting the precious character of these pieces. MCR analysis proved particularly useful for confirming the presence of trace elements, such as gold, as it could differentiate the signals related to minor elements from those due to major chemical elements.
(This article belongs to the Special Issue Robust Image Processing)
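In µXRF mapping, each pixel carries a full spectrum, so PCA reduces the data cube to a handful of score maps that highlight correlated elemental signals. A minimal sketch of that step (our illustration, not the paper's pipeline; scikit-learn assumed):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_score_maps(spectra, map_shape, n_components=5):
    """spectra: (n_pixels, n_channels) per-pixel XRF spectra, flattened in
    raster order from a map of shape map_shape = (rows, cols).
    Returns (rows, cols, n_components) score maps for visual inspection."""
    scores = PCA(n_components=n_components).fit_transform(spectra)
    return scores.reshape(*map_shape, n_components)
```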

10 pages, 1275 KiB  
Concept Paper
Restoration of Uterine Cavity Measurements after Surgical Correction
by Laura Detti, Mary Emily Christiansen, Roberto Levi D’Ancona, Jennifer C. Gordon, Nicole Van de Velde and Irene Peregrin-Alvarez
J. Imaging 2020, 6(7), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070058 - 29 Jun 2020
Cited by 2 | Viewed by 3775
Abstract
Objective: We sought to define the uterine and uterine cavity dimensions of subseptate uteri before and after hysteroscopic surgical incision, and to compare them with those obtained in normal uteri with 3-D ultrasound. Methods: We studied two cohorts of consecutive women: women with a normal-appearing uterine cavity, and women diagnosed with uterine subseptations, examined before and after undergoing hysteroscopic incision. 3-D ultrasound was used to measure the uterine cavity width, length, and area on a frozen coronal view of the uterus. Results: A total of 215 women were included: 89 in the normal uterus group and 126 in the subseptate uterus group. Uterine length and height were similar across the pre-operative and post-operative subseptate uteri and the normal uteri, while uterine width was significantly greater pre-operatively (5.1 ± 0.8 cm) than post-operatively (4.7 ± 0.8 cm) and in the normal uterus group (4.6 ± 0.7 cm; p < 0.001). The pre-operative uterine cavity length (3.3 ± 0.5 cm), width (3.2 ± 0.7 cm), and area (4.4 ± 1.2 cm²) were significantly greater than the post-operative values (length 2.9 ± 0.4 cm; width 2.6 ± 0.6 cm; area 3.7 ± 0.8 cm²; overall p < 0.001), and became similar to the dimensions of the normal uterus. Of the patients who subsequently conceived, 2.6% miscarried in the corrected subseptation group and 28.8% miscarried in the normal uterus group. Conclusions: We defined the ultrasound dimensions of the uterine cavity in subseptate uteri and their change after surgical correction. Uterine cavity length, width, and area show very little variability in normal adult uteri; they are increased in uteri with a subseptation greater than 5.9 mm in length, and regain normal measurements after surgical correction.

22 pages, 4563 KiB  
Article
Analyzing Age-Related Macular Degeneration Progression in Patients with Geographic Atrophy Using Joint Autoencoders for Unsupervised Change Detection
by Guillaume Dupont, Ekaterina Kalinicheva, Jérémie Sublime, Florence Rossant and Michel Pâques
J. Imaging 2020, 6(7), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070057 - 29 Jun 2020
Cited by 7 | Viewed by 3148
Abstract
Age-Related Macular Degeneration (ARMD) is a progressive eye disease that slowly causes patients to go blind. For several years now, understanding how the disease progresses and finding effective medical treatments have been important research goals. Researchers have mostly been interested in studying the evolution of the lesions using techniques ranging from manual annotation to mathematical models of the disease. However, artificial intelligence for ARMD image analysis has become one of the main research focuses for studying the progression of the disease, as accurate manual annotation of its evolution has proved difficult using traditional methods, even for experienced practitioners. In this paper, we propose a deep learning architecture that can detect changes in eye fundus images and assess the progression of the disease. Our method is based on joint autoencoders and is fully unsupervised. The algorithm has been applied to pairs of images from eye fundus image time series of 24 ARMD patients. Our method proved effective when compared with other methods from the literature, including non-neural-network-based algorithms, which are still the current standard for following disease progression, and change detection methods from other fields.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
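Autoencoder-based change detection generally flags the pixels that a model trained to reconstruct or translate the image pair fails to explain. The sketch below is a hedged illustration of that general pattern, not the paper's joint-autoencoder architecture: model_01, model_10, and their .predict method are hypothetical stand-ins for trained networks, and thresh is an assumed tuning parameter.

```python
import numpy as np

def change_map(img_t0, img_t1, model_01, model_10, thresh):
    """model_01 / model_10: hypothetical trained autoencoders translating
    t0 -> t1 and t1 -> t0. Pixels that neither direction reconstructs well
    are flagged as changed (e.g., lesion growth between visits)."""
    err01 = np.abs(model_01.predict(img_t0) - img_t1).mean(axis=-1)
    err10 = np.abs(model_10.predict(img_t1) - img_t0).mean(axis=-1)
    return (err01 + err10) / 2.0 > thresh
```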

14 pages, 3157 KiB  
Article
Measuring Thickness-Dependent Relative Light Yield and Detection Efficiency of Scintillator Screens
by William C. Chuirazzi and Aaron E. Craft
J. Imaging 2020, 6(7), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6070056 - 29 Jun 2020
Cited by 10 | Viewed by 3171
Abstract
Digital camera-based neutron imaging systems, consisting of a neutron scintillator screen optically coupled to a digital camera, are the most common digital neutron imaging systems in the neutron imaging community and are available at state-of-the-art imaging facilities worldwide. Neutron scintillator screens are the integral component of these imaging systems: they directly interact with the neutron beam and dictate the neutron capture efficiency and image quality limits of the imaging system. This work describes a novel approach to testing neutron scintillators that provides a simple and efficient way to measure relative light yield and detection efficiency over a range of scintillator thicknesses using a single scintillator screen and only a few radiographs. Additionally, two methods for correlating screen thickness with the measured data were implemented and compared. An example ⁶LiF:ZnS scintillator screen with nominal thicknesses ranging from 0–300 μm was used to demonstrate the approach. The multi-thickness screen and the image and data processing methods are not exclusive to neutron scintillator screens and could be applied to X-ray imaging as well. This approach has the potential to benefit the entire radiographic imaging community by offering an efficient path forward for manufacturers to develop higher-performance scintillators, and for imaging facilities and service providers to determine the optimal screen parameters for their particular beam and imaging system.
(This article belongs to the Special Issue Neutron Imaging)
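Conceptually, the relative light yield measurement reduces to normalizing the mean gray value of each thickness step of the screen against the brightest step. A minimal sketch (our illustration; the numbers in the usage line are placeholders, not measured data):

```python
def relative_light_yield(gray_by_thickness):
    """gray_by_thickness: mapping of nominal screen thickness (um) to the
    mean gray value measured in that region of the radiograph.
    Returns each step's light yield relative to the brightest step."""
    peak = max(gray_by_thickness.values())
    return {t: g / peak for t, g in sorted(gray_by_thickness.items())}

# hypothetical step-wedge readout, for illustration only:
yields = relative_light_yield({50: 410.0, 100: 730.0, 200: 980.0, 300: 940.0})
```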
