J. Imaging, Volume 7, Issue 2 (February 2021) – 29 articles

Cover Story: Nuclear magnetic resonance (NMR) relaxometry is an essential non-invasive and non-destructive tool to study porous media’s properties and the saturating fluids’ behavior, with a wide range of applications: cements, reservoir rocks, foods. However, especially for two-dimensional NMR (2DNMR) experiments, long inversion times caused by the large data size, together with high sensitivity of the solution to data noise, still represent significant issues. We present a 2DNMR data inversion method combining the truncated singular value decomposition and Tikhonov regularization to accelerate the inversion process and reduce the sensitivity to the regularization parameter value. The quality of 2DNMR relaxation time distributions and the increased computational efficiency obtained on synthetic and real 2DNMR data motivate the extension of such an approach to higher-dimensional problems.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 1164 KiB  
Article
The Quantum Nature of Color Perception: Uncertainty Relations for Chromatic Opposition
by Michel Berthier and Edoardo Provenzi
J. Imaging 2021, 7(2), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020040 - 22 Feb 2021
Cited by 4 | Viewed by 2405
Abstract
In this paper, we provide an overview of the foundation and first results of a very recent quantum theory of color perception, together with novel results about uncertainty relations for chromatic opposition. The major inspiration for this model is the remarkable 1974 work by H.L. Resnikoff, who had the idea to give up the analysis of the space of perceived colors through metameric classes of spectra in favor of the study of its algebraic properties. This strategy made it possible to reveal the importance of hyperbolic geometry in colorimetry. Starting from these premises, we show how Resnikoff’s construction can be extended to a geometrically rich quantum framework, where the concepts of achromatic color, hue and saturation can be rigorously defined. Moreover, the analysis of pure and mixed quantum chromatic states leads to a deep understanding of chromatic opposition and its role in the encoding of visual signals. We complete our paper by proving the existence of uncertainty relations for the degree of chromatic opposition, thus providing a theoretical confirmation of the quantum nature of color perception. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)

17 pages, 370 KiB  
Review
Performance Overview of the Latest Video Coding Proposals: HEVC, JEM and VVC
by Miguel O. Martínez-Rach, Héctor Migallón, Otoniel López-Granado, Vicente Galiano and Manuel P. Malumbres
J. Imaging 2021, 7(2), 39; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020039 - 22 Feb 2021
Cited by 5 | Viewed by 2372
Abstract
The audiovisual entertainment industry has entered a race to find the video encoder offering the best Rate/Distortion (R/D) performance for high-quality high-definition video content. The challenge consists in providing a moderate to low computational/hardware complexity encoder able to run Ultra High-Definition (UHD) video formats of different flavours (360°, AR/VR, etc.) with state-of-the-art R/D performance results. It is necessary to evaluate not only R/D performance, a highly important feature, but also the complexity of future video encoders. New coding tools offering a small increase in R/D performance at the cost of greater complexity are being advanced with caution. We performed a detailed analysis of two evolutions of High Efficiency Video Coding (HEVC) video standards, Joint Exploration Model (JEM) and Versatile Video Coding (VVC), in terms of both R/D performance and complexity. The results show how VVC, which represents the new direction of future standards, has, for the time being, sacrificed R/D performance in order to significantly reduce overall coding/decoding complexity. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)

18 pages, 1343 KiB  
Article
Data-Driven Regularization Parameter Selection in Dynamic MRI
by Matti Hanhela, Olli Gröhn, Mikko Kettunen, Kati Niinimäki, Marko Vauhkonen and Ville Kolehmainen
J. Imaging 2021, 7(2), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020038 - 20 Feb 2021
Cited by 1 | Viewed by 1997
Abstract
In dynamic MRI, sufficient temporal resolution can often only be obtained using imaging protocols which produce undersampled data for each image in the time series. This has led to the popularity of compressed sensing (CS) based reconstructions. One problem in CS approaches is determining the regularization parameters, which control the balance between data fidelity and regularization. We propose a data-driven approach for the total variation regularization parameter selection, where reconstructions yield expected sparsity levels in the regularization domains. The expected sparsity levels are obtained from the measurement data for temporal regularization and from a reference image for spatial regularization. Two formulations are proposed: a simultaneous search for a parameter pair yielding the expected sparsity in both domains (S-surface), and a sequential parameter selection using the S-curve method (Sequential S-curve). The approaches are evaluated using simulated and experimental DCE-MRI. In the simulated test case, both methods produce a parameter pair and reconstruction that is close to the root mean square error (RMSE) optimal pair and reconstruction. In the experimental test case, the methods produce almost identical parameter selections, and the reconstructions are of high perceived quality. Both methods lead to a highly feasible selection of the regularization parameters in both test cases, while the sequential method is computationally more efficient. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
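As a rough illustration of the sequential S-curve idea summarized above, the sketch below bisects a regularization parameter until a toy reconstruction reaches a target sparsity level in the total-variation domain. The soft-thresholding "reconstruction" is only a stand-in for an actual compressed-sensing MRI solver, and all names and values are illustrative rather than taken from the paper.

```python
import numpy as np

# Crude stand-in for a TV-regularized CS reconstruction: soft-threshold the
# finite differences and re-integrate.  A heavier lambda removes more jumps,
# so TV-domain sparsity decreases monotonically with lambda.
def reconstruct(noisy, lam):
    d = np.diff(noisy)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
    return np.concatenate(([noisy[0]], noisy[0] + np.cumsum(d)))

def sparsity_level(x, tol=1e-6):
    # Number of non-zero finite differences, i.e. sparsity in the TV domain.
    return int(np.sum(np.abs(np.diff(x)) > tol))

def s_curve_select(noisy, target_sparsity, lam_lo=1e-4, lam_hi=10.0, iters=50):
    """Bisect lambda (on a log scale) until the reconstruction matches the
    expected sparsity level, in the spirit of the sequential S-curve method."""
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)
        if sparsity_level(reconstruct(noisy, lam)) > target_sparsity:
            lam_lo = lam        # too many active coefficients: regularize more
        else:
            lam_hi = lam        # sparse enough: try weaker regularization
    return np.sqrt(lam_lo * lam_hi)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3, 0.8], 64)            # piecewise-constant phantom
noisy = clean + 0.05 * rng.standard_normal(clean.size)
lam = s_curve_select(noisy, target_sparsity=sparsity_level(clean))
print("selected lambda:", round(lam, 3))
```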

13 pages, 2053 KiB  
Article
Active Learning with Bayesian UNet for Efficient Semantic Image Segmentation
by Isah Charles Saidu and Lehel Csató
J. Imaging 2021, 7(2), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020037 - 17 Feb 2021
Cited by 11 | Viewed by 4232
Abstract
We present a sample-efficient image segmentation method using active learning, which we call Active Bayesian UNet, or AB-UNet. This is a convolutional neural network using batch normalization and max-pool dropout. The Bayesian setup is achieved by exploiting the probabilistic extension of the dropout mechanism, which makes it possible to use the uncertainty inherently present in the system. We set up our experiments on various medical image datasets and highlight that, with a smaller annotation effort, our AB-UNet leads to stable training and better generalization. In addition, it allows us to efficiently choose which samples to annotate from an unlabelled dataset. Full article
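A minimal sketch of how Monte-Carlo dropout uncertainty can drive an acquisition step of the kind described above, assuming a small dropout-equipped network in PyTorch; the toy architecture and the entropy-based acquisition rule are illustrative assumptions, not the authors' AB-UNet.

```python
import torch
import torch.nn as nn

# Toy segmentation net with max-pool + dropout, standing in for a UNet-style backbone.
class TinySegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout2d(p=0.5),          # "max-pool dropout"
            nn.Conv2d(8, n_classes, 3, padding=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.features(x)

def mc_dropout_entropy(model, x, n_samples=20):
    """Predictive entropy under Monte-Carlo dropout: keep dropout active at
    test time, average the softmax outputs, and score pixel-wise uncertainty."""
    model.train()                                          # keeps dropout stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)                             # (B, C, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)
    return entropy.mean(dim=(1, 2))                        # one score per image

# Acquisition step: pick the most uncertain unlabelled images for annotation.
model = TinySegNet()
pool = torch.randn(16, 1, 32, 32)                          # unlabelled pool (dummy data)
scores = mc_dropout_entropy(model, pool)
query = torch.topk(scores, k=4).indices
print("images to annotate next:", query.tolist())
```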

21 pages, 9628 KiB  
Article
A Model-Based Optimization Framework for Iterative Digital Breast Tomosynthesis Image Reconstruction
by Elena Loli Piccolomini and Elena Morotti
J. Imaging 2021, 7(2), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020036 - 13 Feb 2021
Cited by 8 | Viewed by 2905
Abstract
Digital Breast Tomosynthesis is an X-ray imaging technique that allows a volumetric reconstruction of the breast, from a small number of low-dose two-dimensional projections. Although it is already used in the clinical setting, enhancing the quality of the recovered images is still a subject of research. The aim of this paper was to propose and compare, in a general optimization framework, three slightly different models and corresponding accurate iterative algorithms for Digital Breast Tomosynthesis image reconstruction, characterized by a convergent behavior. The suggested model-based implementations are specifically aligned to Digital Breast Tomosynthesis clinical requirements and take advantage of a Total Variation regularizer. We also tune a fully-automatic strategy to set a proper regularization parameter. We assess our proposals on real data, acquired from a breast accreditation phantom and a clinical case. The results confirm the effectiveness of the presented framework in reconstructing breast volumes, with particular focus on the masses and microcalcifications, in a few iterations, and in further enhancing the image quality with a prolonged execution. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)

10 pages, 695 KiB  
Article
Accelerating 3D Medical Image Segmentation by Adaptive Small-Scale Target Localization
by Boris Shirokikh, Alexey Shevtsov, Alexandra Dalechina, Egor Krivov, Valery Kostjuchenko, Andrey Golanov, Victor Gombolevskiy, Sergey Morozov and Mikhail Belyaev
J. Imaging 2021, 7(2), 35; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020035 - 13 Feb 2021
Cited by 9 | Viewed by 3161
Abstract
The prevailing approach for three-dimensional (3D) medical image segmentation is to use convolutional networks. Recently, deep learning methods have achieved human-level performance in several important applied problems, such as volumetry for lung-cancer diagnosis or delineation for radiation therapy planning. However, state-of-the-art architectures, such as U-Net and DeepMedic, are computationally heavy and require workstations accelerated with graphics processing units for fast inference. Meanwhile, little research has been conducted on enabling fast central processing unit (CPU) computations for such networks. Our paper fills this gap. We propose a new segmentation method with a human-like technique to segment a 3D study. First, we analyze the image at a small scale to identify areas of interest and then process only relevant feature-map patches. Our method not only reduces the inference time from 10 min to 15 s but also preserves state-of-the-art segmentation quality, as we illustrate in the set of experiments with two large datasets. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
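The coarse-to-fine idea above ("analyze at a small scale, then process only relevant patches") can be sketched as follows, with dummy stand-ins for the coarse localizer and the fine segmentation network; none of this reflects the authors' actual models.

```python
import numpy as np
from scipy.ndimage import zoom, label, find_objects

def coarse_to_fine_segment(volume, coarse_model, fine_model, scale=4, margin=8):
    """Run a cheap low-resolution localizer first, then apply the expensive
    segmentation model only inside the detected regions of interest."""
    small = zoom(volume, 1.0 / scale, order=1)              # downsampled study
    coarse_mask = coarse_model(small) > 0.5                 # "something is here" map
    labels, _ = label(coarse_mask)
    out = np.zeros_like(volume, dtype=np.uint8)
    for sl in find_objects(labels):
        # Map the low-resolution box back to full resolution and pad it a bit.
        box = tuple(slice(max(0, s.start * scale - margin),
                          min(volume.shape[d], s.stop * scale + margin))
                    for d, s in enumerate(sl))
        out[box] = fine_model(volume[box])                  # full-resolution segmentation
    return out

# Dummy models so the sketch runs end to end.
coarse_model = lambda v: (v > 0.7).astype(float)            # cheap intensity localizer
fine_model = lambda patch: (patch > 0.5).astype(np.uint8)   # stand-in for the CNN
vol = np.zeros((64, 64, 64)); vol[20:30, 40:50, 10:22] = 1.0
mask = coarse_to_fine_segment(vol, coarse_model, fine_model)
print("segmented voxels:", int(mask.sum()))
```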

14 pages, 978 KiB  
Review
Radiomics and Prostate MRI: Current Role and Future Applications
by Giuseppe Cutaia, Giuseppe La Tona, Albert Comelli, Federica Vernuccio, Francesco Agnello, Cesare Gagliardo, Leonardo Salvaggio, Natale Quartuccio, Letterio Sturiale, Alessandro Stefano, Mauro Calamia, Gaspare Arnone, Massimo Midiri and Giuseppe Salvaggio
J. Imaging 2021, 7(2), 34; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020034 - 11 Feb 2021
Cited by 34 | Viewed by 5869
Abstract
Multiparametric prostate magnetic resonance imaging (mpMRI) is widely used as a triage test for men at a risk of prostate cancer. However, the traditional role of mpMRI was confined to prostate cancer staging. Radiomics is the quantitative extraction and analysis of minable data from medical images; it is emerging as a promising tool to detect and categorize prostate lesions. In this paper we review the role of radiomics applied to prostate mpMRI in detection and localization of prostate cancer, prediction of Gleason score and PI-RADS classification, prediction of extracapsular extension and of biochemical recurrence. We also provide a future perspective of artificial intelligence (machine learning and deep learning) applied to the field of prostate cancer. Full article
(This article belongs to the Special Issue Radiomics and Texture Analysis in Medical Imaging)

19 pages, 2111 KiB  
Article
No Matter What Images You Share, You Can Probably Be Fingerprinted Anyway
by Rahimeh Rouhi, Flavio Bertini and Danilo Montesi
J. Imaging 2021, 7(2), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020033 - 11 Feb 2021
Cited by 5 | Viewed by 2187
Abstract
The popularity of social networks (SNs), amplified by the ever-increasing use of smartphones, has intensified online cybercrimes. This trend has accelerated digital forensics through SNs. One of the areas that has received a lot of attention is camera fingerprinting, through which each smartphone is uniquely characterized. Hence, in this paper, we compare classification-based methods to achieve smartphone identification (SI) and user profile linking (UPL) within the same or across different SNs, which can provide investigators with significant clues. We validate the proposed methods on two datasets, our own dataset and the VISION dataset, both including original and shared images on SN platforms such as Google Currents, Facebook, WhatsApp, and Telegram. The obtained results show that k-medoids achieves the best results compared with k-means, hierarchical approaches, and different models of convolutional neural network (CNN) in the classification of the images. The results show that k-medoids provides F1-measure values of up to 0.91 for the SI and UPL tasks. Moreover, the results prove the effectiveness of the methods which tackle the loss of image details through the compression process on the SNs, even for the images from the same model of smartphones. An important outcome of our work is presenting the inter-layer UPL task, which is more desirable in digital investigations as it can link user profiles on different SNs. Full article
(This article belongs to the Special Issue Image and Video Forensics)
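A hedged sketch of the kind of pipeline implied above: compute a crude sensor-noise residual per image, build a pairwise correlation distance matrix, and cluster with a plain k-medoids loop. The residual extraction here is a toy substitute for PRNU-style camera fingerprinting, and the data are synthetic.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def noise_residual(img):
    """Crude sensor-noise residual: image minus a denoised version of itself."""
    return img - median_filter(img, size=3)

def fingerprint_distance(res_a, res_b):
    """1 minus the normalized correlation between two residuals."""
    a = (res_a - res_a.mean()).ravel()
    b = (res_b - res_b.mean()).ravel()
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def k_medoids(dist, k, iters=20, seed=0):
    """Plain k-medoids on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(dist[:, medoids], axis=1)
        new_medoids = np.array([
            np.flatnonzero(assign == c)[
                np.argmin(dist[np.ix_(assign == c, assign == c)].sum(axis=1))]
            for c in range(k)])
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return assign, medoids

# Dummy "images" from two cameras that differ only by a fixed noise pattern.
rng = np.random.default_rng(1)
pattern_a, pattern_b = rng.normal(0, 0.02, (64, 64)), rng.normal(0, 0.02, (64, 64))
images = [gaussian_filter(rng.random((64, 64)), 3) + (pattern_a if i < 5 else pattern_b)
          for i in range(10)]
residuals = [noise_residual(im) for im in images]
D = np.array([[fingerprint_distance(a, b) for b in residuals] for a in residuals])
assign, _ = k_medoids(D, k=2)
print("cluster assignment:", assign.tolist())
```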

11 pages, 2416 KiB  
Article
Clinical Utility of Artificial Intelligence Algorithms to Enhance Wide-Field Optical Coherence Tomography Angiography Images
by Orlaith Mc Grath, Mohammad W. Sarfraz, Abha Gupta, Yan Yang and Tariq Aslam
J. Imaging 2021, 7(2), 32; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020032 - 10 Feb 2021
Cited by 9 | Viewed by 5680
Abstract
The aim of this paper is to investigate the clinical utility of the application of deep learning denoise algorithms on standard wide-field Optical Coherence Tomography Angiography (OCT-A) images. This was a retrospective case-series assessing forty-nine 10 × 10 mm OCT-A1 macula scans of 49 consecutive patients attending a medical retina clinic over a 6-month period. Thirty-seven patients had pathology; 13 had none. Retinal vascular layers were categorised into superficial or deep capillary plexus. For each category, the retinal experts compared the original standard image with the same image that had intelligent denoise applied. When analysing the Superficial Capillary Plexus (SCP), the denoised image was selected as “best for clinical assessment” in 98% of comparisons. No difference was established in the remaining 2%. On evaluating the Deep Capillary Plexus (DCP), the denoised image was preferred in 35% of comparisons. No difference was found in 65%. There was no evidence of new artefactual features or loss of anatomical detail in the denoised images compared to the standard images. The wide-field denoise feature of the Canon Xephilio OCT-A1 produced scans that were clinically preferable over their original OCT-A images, especially for SCP assessment, without evidence of introducing new artefactual errors. Full article

14 pages, 2289 KiB  
Article
Domain Adaptation for Medical Image Segmentation: A Meta-Learning Method
by Penghao Zhang, Jiayue Li, Yining Wang and Judong Pan
J. Imaging 2021, 7(2), 31; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020031 - 10 Feb 2021
Cited by 14 | Viewed by 4694
Abstract
Convolutional neural networks (CNNs) have demonstrated great achievement in increasing the accuracy and stability of medical image segmentation. However, existing CNNs are limited by the problem of dependency on the availability of training data owing to high manual annotation costs and privacy issues. To counter this limitation, domain adaptation (DA) and few-shot learning have been extensively studied. Inspired by these two categories of approaches, we propose an optimization-based meta-learning method for segmentation tasks. Even though existing meta-learning methods use prior knowledge to choose parameters that generalize well from few examples, these methods limit the diversity of the task distribution that they can learn from in medical image segmentation. In this paper, we propose a meta-learning algorithm to augment the existing algorithms with the capability to learn from diverse segmentation tasks across the entire task distribution. Specifically, our algorithm aims to learn from the diversity of image features which characterize a specific tissue type while showing diverse signal intensities. To demonstrate the effectiveness of the proposed algorithm, we conducted experiments using a diverse set of segmentation tasks from the Medical Segmentation Decathlon and two meta-learning benchmarks: model-agnostic meta-learning (MAML) and Reptile. U-Net and Dice similarity coefficient (DSC) were selected as the baseline model and the main performance metric, respectively. The experimental results show that our algorithm surpasses MAML and Reptile by up to 2% and 2.4%, respectively, in terms of the DSC. By showing a consistent improvement in subjective measures, we can also infer that our algorithm generalizes better to a target task that has few examples. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
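For orientation, the sketch below shows a Reptile-style meta-update, one of the two benchmarks mentioned above: adapt a copy of the model on a sampled task, then move the shared initialization toward the adapted weights. The tiny network and the dummy tasks are placeholders, not the authors' method.

```python
import copy
import torch
import torch.nn as nn

def reptile_step(model, task_batches, inner_lr=0.01, meta_lr=0.1, inner_steps=5):
    """One Reptile-style meta-update: adapt a copy of the model to each sampled
    task, then nudge the shared initialization toward the adapted weights."""
    loss_fn = nn.BCEWithLogitsLoss()
    for x, y in task_batches:                        # each element is one task
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # inner-loop adaptation
            opt.zero_grad()
            loss_fn(adapted(x), y).backward()
            opt.step()
        with torch.no_grad():                        # theta += eps * (theta_task - theta)
            for p, p_task in zip(model.parameters(), adapted.parameters()):
                p += meta_lr * (p_task - p)

# Tiny stand-in for a segmentation network; tasks are dummy pixel-wise problems.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
tasks = [(torch.randn(4, 1, 32, 32), torch.randint(0, 2, (4, 1, 32, 32)).float())
         for _ in range(3)]
reptile_step(model, tasks)
print("meta-updated parameters:", sum(p.numel() for p in model.parameters()))
```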

13 pages, 5464 KiB  
Technical Note
Self-Supervised Learning of Satellite-Derived Vegetation Indices for Clustering and Visualization of Vegetation Types
by Ram C. Sharma and Keitarou Hara
J. Imaging 2021, 7(2), 30; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020030 - 08 Feb 2021
Cited by 3 | Viewed by 1851
Abstract
Vegetation indices are commonly used techniques for the retrieval of biophysical and chemical attributes of vegetation. This paper presents the potential of a self-supervised learning approach based on Autoencoders (AEs) and Convolutional Autoencoders (CAEs) for the decorrelation and dimensionality reduction of high-dimensional vegetation indices derived from satellite observations. The research was conducted on Mt. Zao and its surrounding base area in northeast Japan, which has a cool temperate climate, using ground truth points collected in 2018 for 16 vegetation types (including some non-vegetation classes). Monthly median composites of 16 vegetation indices were generated by processing all Sentinel-2 scenes available for the study area from 2017 to 2019. The performance of the AE- and CAE-based compressed images for the clustering and visualization of vegetation types was quantitatively assessed by computing the bootstrap resampling-based confidence interval. The AE- and CAE-based compressed images with three features showed around 4% and 9% improvements in the confidence intervals, respectively, over the classical method. CAEs, using convolutional neural networks, showed better feature extraction and dimensionality reduction capacity than the AEs. The class-wise performance analysis also showed the superiority of the CAEs. This research highlights the potential of AEs and CAEs for attaining a fine clustering and visualization of vegetation types. Full article
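A minimal sketch of the AE side of the approach: a pixel-wise autoencoder that compresses 16 index values into 3 bottleneck features, which would then be used for clustering and visualization. Layer sizes, the optimizer, and the random data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Pixel-wise autoencoder: 16 monthly-composited vegetation indices in,
# 3 decorrelated bottleneck features out.
class IndexAE(nn.Module):
    def __init__(self, n_indices=16, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_indices, 8), nn.ReLU(),
                                     nn.Linear(8, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 8), nn.ReLU(),
                                     nn.Linear(8, n_indices))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = IndexAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

pixels = torch.rand(1024, 16)                  # dummy stack of per-pixel index vectors
for _ in range(200):                           # self-supervised: reconstruct the input
    recon, _ = model(pixels)
    loss = loss_fn(recon, pixels)
    optim.zero_grad(); loss.backward(); optim.step()

_, compressed = model(pixels)                  # 3-feature representation for clustering
print("compressed features:", compressed.shape)
```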

21 pages, 2847 KiB  
Article
No-Reference Image Quality Assessment with Global Statistical Features
by Domonkos Varga
J. Imaging 2021, 7(2), 29; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020029 - 05 Feb 2021
Cited by 22 | Viewed by 5027
Abstract
The perceptual quality of digital images is often deteriorated during storage, compression, and transmission. The most reliable way of assessing image quality is to ask people to provide their opinions on a number of test images. However, this is an expensive and time-consuming process which cannot be applied in real-time systems. In this study, a novel no-reference image quality assessment method is proposed. The introduced method uses a set of novel quality-aware features which globally characterize the statistics of a given test image, such as an extended local fractal dimension distribution feature, extended first digit distribution features in different domains, Bilaplacian features, image moments, and a wide variety of perceptual features. Experimental results are demonstrated on five publicly available benchmark image quality assessment databases: CSIQ, MDID, KADID-10k, LIVE In the Wild, and KonIQ-10k. Full article
(This article belongs to the Special Issue Image and Video Quality Assessment)

35 pages, 4887 KiB  
Review
Noncontact Sensing of Contagion
by Fatema-Tuz-Zohra Khanam, Loris A. Chahl, Jaswant S. Chahl, Ali Al-Naji, Asanka G. Perera, Danyi Wang, Y.H. Lee, Titilayo T. Ogunwa, Samuel Teague, Tran Xuan Bach Nguyen, Timothy D. McIntyre, Simon P. Pegoli, Yiting Tao, John L. McGuire, Jasmine Huynh and Javaan Chahl
J. Imaging 2021, 7(2), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020028 - 05 Feb 2021
Cited by 10 | Viewed by 5139
Abstract
The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and reduce the clinical literature on diagnosis of COVID-19 through symptoms that might be remotely detected as of early May 2020. Vital signs associated with respiratory distress and fever, coughing, and visible infections have been reported. Fever screening by temperature monitoring is currently popular. However, improved noncontact detection is sought. Vital signs including heart rate and respiratory rate are affected by the condition. Cough, fatigue, and visible infections are also reported as common symptoms. There are non-contact methods for measuring vital signs remotely that have been shown to have acceptable accuracy, reliability, and practicality in some settings. Each has its pros and cons and may perform well in some challenges but be inadequate in others. Our review shows that, of the options studied to date, visible-spectrum and thermal-spectrum cameras offer the best choices for truly noncontact sensing: thermal cameras due to their potential to measure all likely symptoms, especially temperature, with a single camera, and video cameras due to their availability, cost, adaptability, and compatibility. Substantial supply chain disruptions during the pandemic and the widespread nature of the problem mean that cost-effectiveness and availability are important considerations. Full article

7 pages, 2465 KiB  
Article
Inspection of Transparent Objects with Varying Light Scattering Using a Frangi Filter
by Dieter P. Gruber and Matthias Haselmann
J. Imaging 2021, 7(2), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020027 - 05 Feb 2021
Cited by 2 | Viewed by 1697
Abstract
This paper proposes a new machine vision method to test the quality of a semi-transparent automotive illuminant component. Difference images of Frangi filtered surface images are used to enhance defect-like image structures. In order to distinguish allowed structures from defective structures, morphological features are extracted and used for a nearest-neighbor-based anomaly score. In this way, it could be demonstrated that a segmentation of occurring defects is possible on transparent illuminant parts. The method turned out to be fast and accurate and is therefore also suited for in-production testing. Full article
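A rough sketch of the described pipeline using scikit-image's Frangi filter: difference of filtered images, morphological features per connected structure, and a nearest-neighbour anomaly score. The threshold, the feature choices, and the "allowed structure" features are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from skimage.filters import frangi
from skimage.measure import label, regionprops

def defect_features(img, reference):
    """Difference of Frangi-filtered images enhances defect-like structures;
    each connected structure is summarized by simple morphological features."""
    diff = np.abs(frangi(img, black_ridges=False) - frangi(reference, black_ridges=False))
    mask = diff > diff.mean() + 3.0 * diff.std()
    feats = [[r.area, r.eccentricity, r.major_axis_length]
             for r in regionprops(label(mask)) if r.area >= 5]
    return np.array(feats, dtype=float).reshape(-1, 3)

def anomaly_score(feats, allowed_feats):
    """Distance of each structure to its nearest 'allowed' structure; the worst
    structure determines the score of the inspected part."""
    if len(feats) == 0:
        return 0.0
    d = np.linalg.norm(feats[:, None, :] - allowed_feats[None, :, :], axis=-1)
    return float(d.min(axis=1).max())

rng = np.random.default_rng(1)
reference = 0.1 * rng.random((128, 128))                       # defect-free surface image
bad_part = reference.copy(); bad_part[60:64, 20:100] += 0.8    # scratch-like defect
allowed = np.array([[12.0, 0.6, 6.0]])                         # hypothetical allowed structure
print("anomaly score:", anomaly_score(defect_features(bad_part, reference), allowed))
```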

17 pages, 1864 KiB  
Article
Personal Heart Health Monitoring Based on 1D Convolutional Neural Network
by Antonella Nannavecchia, Francesco Girardi, Pio Raffaele Fina, Michele Scalera and Giovanni Dimauro
J. Imaging 2021, 7(2), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020026 - 05 Feb 2021
Cited by 21 | Viewed by 3688
Abstract
The automated detection of suspicious anomalies in electrocardiogram (ECG) recordings allows frequent personal heart health monitoring and can drastically reduce the number of ECGs that need to be manually examined by the cardiologists, excluding those classified as normal, facilitating healthcare decision-making and saving a considerable amount of time and money. In this paper, we present a system able to automatically detect the suspect of cardiac pathologies in ECG signals from personal monitoring devices, with the aim to alert the patient to send the ECG to the medical specialist for a correct diagnosis and a proper therapy. The main contributions of this work are: (a) the implementation of a binary classifier based on a 1D-CNN architecture for detecting the suspect of anomalies in ECGs, regardless of the kind of cardiac pathology; (b) the analysis was carried out on 21 classes of different cardiac pathologies classified as anomalous; and (c) the possibility to classify anomalies even in ECG segments containing, at the same time, more than one class of cardiac pathologies. Moreover, 1D-CNN based architectures can allow an implementation of the system on cheap smart devices with low computational complexity. The system was tested on the ECG signals from the MIT-BIH ECG Arrhythmia Database for the MLII derivation. Two different experiments were carried out, showing remarkable performance compared to other similar systems. The best result showed high accuracy and recall computed in terms of ECG segments, and even higher accuracy and recall in terms of patients alerted, i.e., considering the detection of anomalies with respect to entire ECG recordings. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
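A minimal 1D-CNN binary classifier of the kind described above, written in PyTorch on dummy ECG segments; the layer sizes, segment length, and training loop are illustrative assumptions rather than the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Compact 1D-CNN binary classifier for fixed-length ECG segments (e.g. an MLII lead).
class ECG1DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),                        # one logit: normal vs. suspect anomaly
        )

    def forward(self, x):                            # x: (batch, 1, segment_len)
        return self.net(x)

model = ECG1DCNN()
criterion = nn.BCEWithLogitsLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

segments = torch.randn(8, 1, 360)                    # dummy one-second ECG segments
labels = torch.randint(0, 2, (8, 1)).float()         # 1 = suspect anomaly
loss = criterion(model(segments), labels)
optim.zero_grad(); loss.backward(); optim.step()
print("training loss on dummy batch:", float(loss))
```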

15 pages, 3321 KiB  
Article
Evaluation of Event-Based Corner Detectors
by Özgün Yılmaz, Camille Simon-Chane and Aymeric Histace
J. Imaging 2021, 7(2), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020025 - 03 Feb 2021
Cited by 11 | Viewed by 2694
Abstract
Bio-inspired Event-Based (EB) cameras are a promising new technology that outperforms standard frame-based cameras in scenes with extreme lighting and fast motion. Already, a number of EB corner detection techniques have been developed; however, the performance of these EB corner detectors has only been evaluated based on a few author-selected criteria rather than on a unified common basis, as proposed here. Moreover, their experimental conditions are mainly limited to less interesting operational regions of the EB camera (on which frame-based cameras can also operate), and some of the criteria, by definition, could not distinguish whether the detector had any systematic bias. In this paper, we evaluate five of the seven existing EB corner detectors on a public dataset including extreme illumination conditions that have not been investigated before. Moreover, this evaluation is the first of its kind in terms of analysing not only such a high number of detectors, but also applying a unified procedure for all. Contrary to previous assessments, we employed both the intensity and trajectory information within the public dataset rather than only one of them. We show that a rigorous comparison among EB detectors can be performed without tedious manual labelling and even with challenging acquisition conditions. This study thus proposes the first standard unified EB corner evaluation procedure, which will enable better understanding of the underlying mechanisms of EB cameras and can therefore lead to more efficient EB corner detection techniques. Full article

22 pages, 8410 KiB  
Article
Real-Time Quality Control of Heat Sealed Bottles Using Thermal Images and Artificial Neural Network
by Samuel Cruz, António Paulino, Joao Duraes and Mateus Mendes
J. Imaging 2021, 7(2), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020024 - 03 Feb 2021
Cited by 8 | Viewed by 2186
Abstract
Quality control of heat sealed bottles is very important to minimize waste and in some cases protect people’s health. The present paper describes a case study where an automated, non-invasive and non-destructive quality control system was designed to assess the quality of the seals of bottles containing pesticide. In this case study, the integrity of the seals is evaluated using an artificial neural network based on images of the seals processed with computer vision techniques. Because the seals are not directly visible from the bottle exterior, the images are infrared pictures obtained using a thermal camera. The method is non-invasive, automated, and can be applied to common conveyor belts currently used in industrial plants. The results show that the inspection process is effective in identifying defective seals, with a precision of 98.6% and a recall of 100%, and because it is automated, it can be scaled up to large bottle-processing plants. Full article

20 pages, 1132 KiB  
Article
Incoherent Radar Imaging for Breast Cancer Detection and Experimental Validation against 3D Multimodal Breast Phantoms
by Antonio Cuccaro, Angela Dell’Aversano, Giuseppe Ruvio, Jacinta Browne and Raffaele Solimene
J. Imaging 2021, 7(2), 23; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020023 - 01 Feb 2021
Cited by 10 | Viewed by 1762
Abstract
In this paper we consider radar approaches for breast cancer detection. The aim is to give a brief review of the main features of incoherent methods, based on beam-forming and Multiple SIgnal Classification (MUSIC) algorithms, that we have recently developed, and to compare them with classical coherent beam-forming. Those methods have the remarkable advantage of not requiring antenna characterization/compensation, which can be problematic in view of the close (to the breast) proximity set-up usually employed in breast imaging. Moreover, we proceed to an experimental validation of one of the incoherent methods, i.e., the I-MUSIC, using the multimodal breast phantom we have previously developed. While in a previous paper we focused on the phantom manufacture and characterization, here we are mainly concerned with providing the detail of the reconstruction algorithm, in particular for a new multi-step clutter rejection method that was employed and only barely described. In this regard, this contribution can be considered as a completion of our previous study. The experiments against the phantom show promising results and highlight the crucial role played by the clutter rejection procedure. Full article
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)

19 pages, 709 KiB  
Article
Enhanced Region Growing for Brain Tumor MR Image Segmentation
by Erena Siyoum Biratu, Friedhelm Schwenker, Taye Girma Debelee, Samuel Rahimeto Kebede, Worku Gachena Negera and Hasset Tamirat Molla
J. Imaging 2021, 7(2), 22; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020022 - 01 Feb 2021
Cited by 58 | Viewed by 3983
Abstract
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that propagates out of control of the normal forces that regulate growth inside the brain. A brain tumor appears when one type of cell changes from its normal characteristics and grows and multiplies abnormally. The unusual growth of cells within the brain or inside the skull, which can be cancerous or non-cancerous, has been a cause of death for adults in developed countries and children in developing countries such as Ethiopia. Previous studies have shown that the region growing algorithm initializes the seed point either manually or semi-manually, which in turn affects the segmentation result. In this paper, we propose an enhanced region-growing algorithm with automatic seed point initialization. The proposed approach’s performance was compared with state-of-the-art deep learning algorithms using the common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, we computed the mean intensity of each block and selected the five blocks with the maximum mean intensities out of the eight. Next, each of the five maximum mean intensities was used separately as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated using dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first experimental setup, 15 randomly selected brain images were used for testing, achieving a DSS value of 0.89. In the second and third experimental setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value for the three experimental setups was 0.86. Full article
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
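The automatic seed-point initialization described above lends itself to a short sketch: threshold-based skull stripping, splitting the image into eight blocks, picking the blocks with the highest mean intensities, and growing a region from each candidate seed. The threshold, block geometry, and growing tolerance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def candidate_seed_points(brain_img, n_blocks=(2, 4), top_k=5):
    """Skull-strip by thresholding, split the image into 8 blocks, and return
    the centres of the blocks with the highest mean intensity as seed points."""
    stripped = np.where(brain_img > np.percentile(brain_img, 40), brain_img, 0.0)
    h, w = stripped.shape
    bh, bw = h // n_blocks[0], w // n_blocks[1]
    blocks = []
    for i in range(n_blocks[0]):
        for j in range(n_blocks[1]):
            block = stripped[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            blocks.append((block.mean(), (i * bh + bh // 2, j * bw + bw // 2)))
    blocks.sort(key=lambda b: b[0], reverse=True)
    return [centre for _, centre in blocks[:top_k]]

def region_grow(img, seed, tol=0.15):
    """Plain intensity-based region growing from a single seed point."""
    region = np.zeros(img.shape, dtype=bool)
    stack, ref = [seed], img[seed]
    while stack:
        y, x = stack.pop()
        if region[y, x] or abs(img[y, x] - ref) > tol:
            continue
        region[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not region[ny, nx]:
                stack.append((ny, nx))
    return region

img = np.zeros((64, 64)); img[10:30, 10:30] = 0.9; img[40:55, 35:60] = 0.6
rois = [region_grow(img, seed) for seed in candidate_seed_points(img)]
print("candidate ROI sizes:", sorted(int(r.sum()) for r in rois))
```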

21 pages, 5106 KiB  
Article
Critical Aspects of Person Counting and Density Estimation
by Roland Perko, Manfred Klopschitz, Alexander Almer and Peter M. Roth
J. Imaging 2021, 7(2), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020021 - 31 Jan 2021
Cited by 8 | Viewed by 2473
Abstract
Many scientific studies deal with person counting and density estimation from single images. Recently, convolutional neural networks (CNNs) have been applied for these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper was to identify the critical aspects of these tasks and to show how these limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with identified problems. These include the discovery of bias in the reference data sets, ambiguity in ground truth generation, and mismatching of evaluation metrics w.r.t. the training loss function. The experimental results show that our modifications allow for significantly outperforming the baseline in terms of the accuracy of person counts and density estimation. In this way, we get a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in the evaluation protocols. Full article

11 pages, 1308 KiB  
Article
Improved Visual Localization via Graph Filtering
by Carlos Lassance, Yasir Latif, Ravi Garg, Vincent Gripon and Ian Reid
J. Imaging 2021, 7(2), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020020 - 30 Jan 2021
Cited by 1 | Viewed by 1709
Abstract
Vision-based localization is the problem of inferring the pose of the camera given a single image. One commonly used approach relies on image retrieval where the query input is compared against a database of localized support examples and its pose is inferred with the help of the retrieved items. This assumes that images taken from the same places consist of the same landmarks and thus would have similar feature representations. These representations can learn to be robust to different variations in capture conditions like time of the day or weather. In this work, we introduce a framework which aims at enhancing the performance of such retrieval-based localization methods. It consists in taking into account additional information available, such as GPS coordinates or temporal proximity in the acquisition of the images. More precisely, our method consists in constructing a graph based on this additional information that is later used to improve reliability of the retrieval process by filtering the feature representations of support and/or query images. We show that the proposed method is able to significantly improve the localization accuracy on two large scale datasets, as well as the mean average precision in classical image retrieval scenarios. Full article
(This article belongs to the Special Issue Image Retrieval in Transfer Learning)
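A compact sketch of the graph-filtering idea: build an adjacency graph from side information (GPS-like coordinates here), then low-pass filter the image descriptors over that graph. The filter form and all parameters are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def graph_filter_features(features, coords, radius=10.0, alpha=0.7, iters=2):
    """Smooth image descriptors over a graph built from side information
    (here: GPS-like coordinates), so that nearby captures share representation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (d < radius).astype(float)
    np.fill_diagonal(adj, 0.0)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    walk = adj / deg                                   # row-stochastic adjacency
    filtered = features.copy()
    for _ in range(iters):                             # simple low-pass graph filter
        filtered = alpha * filtered + (1.0 - alpha) * walk @ filtered
    return filtered

rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(0.0, 3.0, size=(50, 2)), axis=0)   # a camera trajectory
features = rng.normal(size=(50, 128))                            # support descriptors
smoothed = graph_filter_features(features, coords)
print("filtered descriptor matrix:", smoothed.shape)
```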

22 pages, 548 KiB  
Review
Deep Learning for Brain Tumor Segmentation: A Survey of State-of-the-Art
by Tirivangani Magadza and Serestina Viriri
J. Imaging 2021, 7(2), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020019 - 29 Jan 2021
Cited by 84 | Viewed by 8889
Abstract
Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and planning treatment. The accurate segmentation of lesions requires more than one imaging modality with varying contrasts. As a result, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis. Full article
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)

23 pages, 2380 KiB  
Article
A New Hybrid Inversion Method for 2D Nuclear Magnetic Resonance Combining TSVD and Tikhonov Regularization
by Germana Landi, Fabiana Zama and Villiam Bortolotti
J. Imaging 2021, 7(2), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020018 - 28 Jan 2021
Cited by 2 | Viewed by 1953
Abstract
This paper is concerned with the reconstruction of relaxation time distributions in Nuclear Magnetic Resonance (NMR) relaxometry. This is a large-scale and ill-posed inverse problem with many potential applications in biology, medicine, chemistry, and other disciplines. However, the large amount of data and the consequently long inversion times, together with the high sensitivity of the solution to the value of the regularization parameter, still represent a major issue in the applicability of NMR relaxometry. We present a method for two-dimensional data inversion (2DNMR) which combines Truncated Singular Value Decomposition and Tikhonov regularization in order to accelerate the inversion and to reduce the sensitivity to the value of the regularization parameter. The Discrete Picard condition is used to jointly select the SVD truncation and Tikhonov regularization parameters. We evaluate the performance of the proposed method on both simulated and real NMR measurements. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
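A toy, one-dimensional version of the hybrid TSVD–Tikhonov idea described above: truncate the SVD of an exponential kernel and apply Tikhonov filter factors on the retained spectrum. The kernel, the fixed truncation level, and the regularization value are illustrative assumptions; the paper selects these parameters via the Discrete Picard condition and works with a 2D kernel.

```python
import numpy as np

def tsvd_tikhonov_solve(K, s, k, lam):
    """Hybrid inversion sketch: keep the k leading singular components of the
    kernel, then apply Tikhonov filter factors in the reduced subspace."""
    U, sig, Vt = np.linalg.svd(K, full_matrices=False)
    Uk, sk, Vk = U[:, :k], sig[:k], Vt[:k, :]
    filt = sk / (sk**2 + lam**2)                      # Tikhonov filter factors
    return Vk.T @ (filt * (Uk.T @ s))

# Toy 1D Laplace-type problem standing in for one dimension of the 2DNMR kernel.
t = np.linspace(0.01, 1.0, 200)[:, None]              # measurement times
T = np.logspace(-2, 0, 100)[None, :]                  # relaxation times
K = np.exp(-t / T)                                    # exponential kernel
x_true = np.exp(-0.5 * ((np.log10(T[0]) + 1.0) / 0.15) ** 2)   # single broad peak
s = K @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

x_est = tsvd_tikhonov_solve(K, s, k=15, lam=1e-2)
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```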

13 pages, 2103 KiB  
Article
The Potential Use of Radiomics with Pre-Radiation Therapy MR Imaging in Predicting Risk of Pseudoprogression in Glioblastoma Patients
by Michael Baine, Justin Burr, Qian Du, Chi Zhang, Xiaoying Liang, Luke Krajewski, Laura Zima, Gerard Rux, Chi Zhang and Dandan Zheng
J. Imaging 2021, 7(2), 17; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020017 - 28 Jan 2021
Cited by 11 | Viewed by 2685
Abstract
Glioblastoma (GBM) is the most common adult glioma. Differentiating post-treatment effects such as pseudoprogression from true progression is paramount for treatment. Radiomics has been shown to predict overall survival and MGMT (methylguanine-DNA methyltransferase) promoter status in those with GBM. A potential application of radiomics is predicting pseudoprogression on pre-radiotherapy (RT) scans for patients with GBM. A retrospective review was performed with radiomic data analyzed using pre-RT MRI scans. Pseudoprogression was defined as post-treatment findings on imaging that resolved with steroids or spontaneously on subsequent imaging. Of the 72 patients identified for the study, 35 were able to be assessed for pseudoprogression, and 8 (22.9%) had pseudoprogression. A total of 841 radiomic features were examined along with clinical features. Receiver operating characteristic (ROC) analyses were performed to determine the AUC (area under ROC curve) of models of clinical features, radiomic features, and combining clinical and radiomic features. Two radiomic features were identified to be the optimal model combination. The ROC analysis found that the predictive ability of this combination was higher than using clinical features alone (mean AUC: 0.82 vs. 0.62). Additionally, combining the radiomic features with clinical factors did not improve predictive ability. Our results indicate that radiomics is potentially capable of predicting future development of pseudoprogression in patients with GBM using pre-RT MRIs. Full article
(This article belongs to the Special Issue Radiomics and Texture Analysis in Medical Imaging)

15 pages, 920 KiB  
Article
Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems
by Pedro Furtado
J. Imaging 2021, 7(2), 16; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020016 - 27 Jan 2021
Cited by 11 | Viewed by 2282
Abstract
Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersect-over-the-union (IoU), and dice. Which should be used? Is it useful to consider simple variations, such as modifying formula coefficients? How do characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), liver in computed tomography images (CT) and diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and variations, as well as segmentation scores of different targets. We first describe the limitations of metrics, since loss is a metric, then we describe and test alternatives. Experimentally, we observed that DeeplabV3 outperforms UNet and fully convolutional network (FCN) in all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy over all datasets, while IoU improved 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false positive vs. false negative weights improved MRI by 12 pp, and assigning zero weight to background improved EFI by 6 pp. Multiclass segmentation scored higher than n-uniclass segmentation in MRI by 8 pp. EFI lesions score low compared to more constant structures (e.g., optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, and that it is worth assigning zero weight to the background class and testing different weights on false positives and false negatives. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
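One of the simple loss variations discussed above, assigning per-class weights (for example, zero weight to the background class) inside a dice loss, can be sketched as follows; the shapes and weights are illustrative, not the paper's exact formulation.

```python
import numpy as np

def weighted_dice_loss(probs, target_onehot, class_weights, eps=1e-6):
    """Dice loss with per-class weights.  probs and target_onehot are
    (num_classes, H, W) arrays of softmax outputs and one-hot ground truth."""
    inter = (probs * target_onehot).sum(axis=(1, 2))
    union = probs.sum(axis=(1, 2)) + target_onehot.sum(axis=(1, 2))
    dice_per_class = (2.0 * inter + eps) / (union + eps)
    w = np.asarray(class_weights, dtype=float)
    return 1.0 - float((w * dice_per_class).sum() / w.sum())

rng = np.random.default_rng(0)
n_classes, height, width = 3, 32, 32
logits = rng.normal(size=(n_classes, height, width))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)   # softmax
labels = rng.integers(0, n_classes, size=(height, width))
onehot = np.eye(n_classes)[labels].transpose(2, 0, 1)

print("plain dice loss:   ", weighted_dice_loss(probs, onehot, [1, 1, 1]))
print("background ignored:", weighted_dice_loss(probs, onehot, [0, 1, 1]))
```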

13 pages, 10882 KiB  
Article
Hand Motion-Aware Surgical Tool Localization and Classification from an Egocentric Camera
by Tomohiro Shimizu, Ryo Hachiuma, Hiroki Kajita, Yoshifumi Takatsume and Hideo Saito
J. Imaging 2021, 7(2), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020015 - 25 Jan 2021
Cited by 8 | Viewed by 2450
Abstract
Detecting surgical tools is an essential task for the analysis and evaluation of surgical videos. However, in open surgery such as plastic surgery, it is difficult to detect them because there are surgical tools with similar shapes, such as scissors and needle holders. Unlike in endoscopic surgery, the tips of the tools are often hidden in the operating field and are not captured clearly due to low camera resolution, whereas the movements of the tools and hands can be captured. Because the different uses of each tool require different hand movements, it is possible to use hand movement data to classify the two types of tools. We combined three modules for localization, selection, and classification, for the detection of the two tools. In the localization module, we employed the Faster R-CNN to detect surgical tools and target hands, and in the classification module, we extracted hand movement information by combining ResNet-18 and LSTM to classify two tools. We created a dataset in which seven different types of open surgery were recorded, and we provided the annotation of surgical tool detection. Our experiments show that our approach successfully detected the two different tools and outperformed the two baseline methods. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)

6 pages, 223 KiB  
Editorial
Acknowledgment to Reviewers of Journal of Imaging in 2020
by Journal of Imaging Editorial Office
J. Imaging 2021, 7(2), 14; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020014 - 25 Jan 2021
Viewed by 1205
Abstract
Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Journal of Imaging maintains its standards for the high quality of its published papers [...] Full article
22 pages, 31534 KiB  
Article
Deep Concatenated Residual Networks for Improving Quality of Halftoning-Based BTC Decoded Image
by Heri Prasetyo, Alim Wicaksono Hari Prayuda, Chih-Hsien Hsia and Jing-Ming Guo
J. Imaging 2021, 7(2), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020013 - 25 Jan 2021
Cited by 4 | Viewed by 1630
Abstract
This paper presents a simple technique for improving the quality of the halftoning-based block truncation coding (H-BTC) decoded image. The H-BTC is an image compression technique inspired by typical block truncation coding (BTC). The H-BTC yields a better decoded image compared to that of the classical BTC scheme under human visual observation. However, impulsive noise commonly appears in the H-BTC decoded image. It induces an unpleasant impression when one observes the decoded image. Thus, the proposed method presented in this paper aims to suppress the occurring impulsive noise by exploiting a deep learning approach. This process can be regarded as an ill-posed inverse imaging problem, in which the set of candidate solutions can be extremely large and undetermined. The proposed method utilizes convolutional neural networks (CNN) and residual learning frameworks to solve the aforementioned problem. These frameworks effectively reduce the occurrence of impulsive noise and, at the same time, improve the quality of H-BTC decoded images. The experimental results show the effectiveness of the proposed method in terms of subjective and objective measurements. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)
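A small DnCNN-style residual denoiser sketches the residual-learning idea described above: the network predicts the impulsive noise and subtracts it from the decoded image. The depth, widths, and simulated noise are assumptions, not the authors' deep concatenated architecture.

```python
import torch
import torch.nn as nn

# Residual denoiser: the network predicts the noise component and the clean
# image is obtained as (decoded image - predicted noise).
class ResidualDenoiser(nn.Module):
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, decoded):
        return decoded - self.body(decoded)

model = ResidualDenoiser()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(4, 1, 64, 64)                     # ground-truth images
noise_mask = (torch.rand_like(clean) < 0.05).float() # impulsive (salt-like) noise
decoded = torch.clamp(clean + noise_mask, 0.0, 1.0)  # simulated H-BTC decoded image
loss = nn.functional.mse_loss(model(decoded), clean)
optim.zero_grad(); loss.backward(); optim.step()
print("loss on dummy batch:", float(loss))
```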

10 pages, 3064 KiB  
Article
Olympic Games Event Recognition via Transfer Learning with Photobombing Guided Data Augmentation
by Yousef I. Mohamad, Samah S. Baraheem and Tam V. Nguyen
J. Imaging 2021, 7(2), 12; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020012 - 20 Jan 2021
Cited by 3 | Viewed by 3450
Abstract
Automatic event recognition in sports photos is both an interesting and valuable research topic in the field of computer vision and deep learning. With the rapid increase and the explosive spread of data, which is being captured momentarily, the need for fast and precise access to the right information has become a challenging task with considerable importance for multiple practical applications, e.g., sports image and video search, sport data analysis, healthcare monitoring applications, monitoring and surveillance systems for indoor and outdoor activities, and video captioning. In this paper, we evaluate different deep learning models in recognizing and interpreting the sport events in the Olympic Games. To this end, we collect a dataset dubbed Olympic Games Event Image Dataset (OGED) including 10 different sport events scheduled for the Olympic Games Tokyo 2020. Then, transfer learning is applied to three popular deep convolutional neural network architectures, namely, AlexNet, VGG-16 and ResNet-50, along with various data augmentation methods. Extensive experiments show that ResNet-50 with the proposed photobombing guided data augmentation achieves 90% in terms of accuracy. Full article
(This article belongs to the Special Issue Image Retrieval in Transfer Learning)
