
J. Imaging, Volume 7, Issue 4 (April 2021) – 14 articles

Cover Story: Beyond well-established diagnostic imaging applications, ultrasounds are currently emerging in clinical practice as a noninvasive technology for therapy: the temperature inside target solid tumors can be increased, leading to apoptosis or necrosis of neoplastic tissues. Patient safety during magnetic resonance-guided focused ultrasound surgery (MRgFUS) treatments was investigated by performing experiments in a tissue-mimicking phantom, as well as in ex vivo skin samples, to promptly identify unwanted temperature rises. MR images were analyzed using classical proton resonance frequency (PRF) shift and referenceless thermometry methods for estimating temperature variations, and the results were compared against interferometric optical fiber measurements. Temperature increases during the treatment were not accurately detected by MR-based referenceless thermometry methods, and more sensitive measurement systems, such as optical fibers, would be required.
Open Access Editorial
Deep Learning in Medical Image Analysis
J. Imaging 2021, 7(4), 74; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040074 - 20 Apr 2021
Abstract
Over recent years, deep learning (DL) has established itself as a powerful tool across a broad spectrum of domains in imaging [...]
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
Open Access Article
Iterative-Trained Semi-Blind Deconvolution Algorithm to Compensate Straylight in Retinal Images
J. Imaging 2021, 7(4), 73; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040073 - 16 Apr 2021
Abstract
The optical quality of an image depends on both the optical properties of the imaging system and the physical properties of the medium in which the light travels from the object to the final imaging sensor. The analysis of the point spread function of the optical system is an objective way to quantify image degradation. In retinal imaging, the presence of corneal or crystalline lens opacifications spreads the light over wide angular distributions. If the mathematical operator that degrades the image is known, the image can be restored through deconvolution methods. In the particular case of retinal imaging, this operator may be unknown, or only partially known, due to the presence of cataracts, corneal edema, or vitreous opacification. In those cases, blind deconvolution theory provides useful results for restoring important spatial information of the image. In this work, a new semi-blind deconvolution method has been developed by training an iterative process with a Glare Spread Function kernel, based on the Richardson–Lucy deconvolution algorithm, to compensate for the veiling glare effect in retinal images caused by intraocular straylight. The method was first tested with simulated retinal images generated from a straylight eye model and then applied to a real retinal image dataset composed of healthy subjects and patients with glaucoma and diabetic retinopathy. Results showed the capacity of the algorithm to detect and compensate for the veiling glare degradation, improving image sharpness by up to 1000% for healthy subjects and up to 700% for the pathological retinal images. This image quality improvement allows image segmentation to be performed using hidden spatial information restored by the deconvolution.
(This article belongs to the Special Issue Blind Image Restoration)
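The Richardson–Lucy update at the core of such methods can be sketched in a few lines. This is the classical non-blind form, assuming the degradation kernel is known; the paper's semi-blind variant instead trains the iterative process with a Glare Spread Function kernel, so treat this purely as a sketch of the underlying algorithm:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Classical (non-blind) Richardson-Lucy deconvolution sketch.

    Repeatedly corrects the current estimate by the ratio of the observed
    image to the estimate re-blurred with the point spread function.
    """
    estimate = np.full_like(observed, 0.5)  # flat initial guess
    psf_mirror = psf[::-1, ::-1]            # adjoint of the blur operator
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # eps avoids division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

On a noiseless point source blurred by a known kernel, a few dozen iterations visibly re-concentrate the energy; real retinal images need regularization and, as in the paper, an estimated kernel.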

Open Access Article
Psychophysical Determination of the Relevant Colours That Describe the Colour Palette of Paintings
J. Imaging 2021, 7(4), 72; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040072 - 14 Apr 2021
Abstract
In an earlier study, the so-called “relevant colours” of a painting were heuristically introduced as a term to describe the number of colours that would stand out for an observer when just glancing at the painting. The purpose of this study is to analyse how observers determine the relevant colours by describing their subjective impressions of the most representative colours in paintings, and to provide psychophysical backing for a related computational model we proposed in a previous work. This subjective impression is elicited by an efficient and optimal processing of the most representative colour instances in painting images. Our results suggest an average number of 21 subjective colours. This number is in close agreement with the computational number of relevant colours previously obtained and allows a reliable segmentation of colour images using a small number of colours without introducing any colour categorization. In addition, our results are in good agreement with the directions of colour preference derived from an independent component analysis. We show that independent component analysis of the painting images yields directions of colour preference aligned with the relevant colours of those images. Following on from this analysis, the results suggest that hue components are efficiently distributed over a discrete number of directions and could serve as a priori instances describing the most representative colours that make up the colour palette of paintings.
(This article belongs to the Special Issue Advances in Color Imaging)
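Segmenting a painting with a small, fixed number of representative colours can be illustrated with a plain k-means quantization in RGB. This is only a sketch of the general idea, not the authors' computational model of relevant colours, and `quantize_colours` is a hypothetical helper name:

```python
import numpy as np

def quantize_colours(pixels, k=21, iterations=20, seed=0):
    """Cluster RGB pixels into k representative colours (plain k-means).

    pixels: (N, 3) array of RGB values. Returns the k cluster centres
    and, for each pixel, the index of its assigned colour.
    """
    rng = np.random.default_rng(seed)
    # Initialize centres with k distinct pixels
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iterations):
        # Assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres, labels
```

Replacing every pixel by its centre yields an image painted with exactly k colours, mirroring the paper's finding that around 21 colours suffice for a reliable segmentation.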

Open Access Article
Deeply Supervised UNet for Semantic Segmentation to Assist Dermatopathological Assessment of Basal Cell Carcinoma
J. Imaging 2021, 7(4), 71; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040071 - 13 Apr 2021
Abstract
Accurate and fast assessment of resection margins is an essential part of a dermatopathologist’s clinical routine. In this work, we develop a deep learning method to assist dermatopathologists by marking critical regions that have a high probability of exhibiting pathological features in whole slide images (WSI). We focus on detecting basal cell carcinoma (BCC) through semantic segmentation using several models based on the UNet architecture. The study includes 650 WSI with 3443 tissue sections in total. Two clinical dermatopathologists annotated the data, marking the exact location of tumor tissue on 100 WSI. The rest of the data, with ground-truth section-wise labels, is used to further validate and test the models. We analyze two different encoders for the first part of the UNet network and two additional training strategies: (a) deep supervision and (b) a linear combination of decoder outputs, and obtain some interpretation of what the network’s decoder does in each case. The best model achieves over 96% accuracy, sensitivity, and specificity on the test set.
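The accuracy, sensitivity, and specificity reported for the best model are standard confusion-matrix quantities over pixels; a minimal sketch for binary masks:

```python
import numpy as np

def seg_metrics(pred, target):
    """Pixel-wise accuracy, sensitivity and specificity for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # true positives
    tn = np.sum(~pred & ~target)  # true negatives
    fp = np.sum(pred & ~target)   # false positives
    fn = np.sum(~pred & target)   # false negatives
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)  # recall on the tumor class
    specificity = tn / (tn + fp)  # recall on the background class
    return accuracy, sensitivity, specificity
```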

Open Access Article
Progressive Secret Sharing with Adaptive Priority and Perfect Reconstruction
J. Imaging 2021, 7(4), 70; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040070 - 03 Apr 2021
Abstract
A new technique for progressive visual secret sharing (PVSS) with adaptive priority weights is proposed in this paper. The approach employs bitwise eXclusive-OR (XOR)-based operations to generate a set of shared images from a single secret image. It effectively overcomes the limitation of earlier schemes in dealing with an odd number of stacked or collected shared images in the recovery process: the presented technique works well whether the number of stacked shared images is odd or even. As documented in the experimental results, the proposed method performs well on binary, grayscale, and color images, with a perfectly reconstructed secret image. In addition, the performance of the proposed method is supported by a theoretical analysis showing its lossless ability to recover the secret image. It can therefore be considered a strong candidate for implementing a PVSS system.
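The lossless XOR principle such schemes build on can be sketched as a basic (n, n) scheme, where XOR-stacking all n shares reconstructs the secret exactly; the paper's progressive, priority-weighted construction is considerably more elaborate than this sketch:

```python
import numpy as np

def make_shares(secret, n, seed=0):
    """(n, n) XOR secret sharing sketch: all n shares are needed to recover.

    The first n-1 shares are uniformly random noise; the last share is the
    secret XORed with all of them, so each share alone reveals nothing.
    """
    rng = np.random.default_rng(seed)
    shares = [rng.integers(0, 256, secret.shape, dtype=np.uint8)
              for _ in range(n - 1)]
    last = secret.copy()
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

def recover(shares):
    """XOR-stack all shares; the random parts cancel, leaving the secret."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out
```

Because XOR is its own inverse, recovery is exact bit for bit, which is the "perfect reconstruction" property the abstract refers to.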

Open Access Review
A Comprehensive Review of Deep-Learning-Based Methods for Image Forensics
J. Imaging 2021, 7(4), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040069 - 03 Apr 2021
Abstract
Seeing is not believing anymore. Various techniques have brought the ability to modify an image to our fingertips, and lowering the barrier of specialized knowledge has been the focus of the companies that create and sell these tools. Furthermore, image forgeries are now so realistic that it becomes difficult for the naked eye to differentiate between fake and real media. This can cause many problems, from misleading public opinion to the use of doctored evidence in court. For these reasons, it is important to have tools that can help us discern the truth. This paper presents a comprehensive literature review of image forensics techniques, with a special focus on deep-learning-based methods. In this review, we cover a broad range of image forensics problems, including the detection of routine image manipulations, detection of intentional image falsifications, camera identification, classification of computer graphics images, and detection of emerging Deepfake images. The review shows that even though image forgeries are becoming easier to create, there are several options for detecting each kind of them. A review of different image databases and an overview of anti-forensic methods are also presented. Finally, we suggest some future research directions that the community could consider to tackle the spread of doctored images more effectively.
(This article belongs to the Special Issue Image and Video Forensics)

Open Access Article
A Fast Preprocessing Method for Micro-Expression Spotting via Perceptual Detection of Frozen Frames
J. Imaging 2021, 7(4), 68; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040068 - 02 Apr 2021
Abstract
This paper presents a preliminary study of a fast preprocessing method for facial micro-expression (ME) spotting in video sequences. The rationale is to detect frames containing frozen expressions as a quick warning for the presence of MEs. In fact, those frames can precede or follow MEs (or both), according to the ME type and the subject’s reaction. To that end, inspired by the Adelson–Bergen motion energy model and the instinctive nature of preattentive vision, global visual-perception-based features were employed for the detection of frozen frames. Preliminary results achieved on both controlled and uncontrolled videos confirmed that the proposed method is able to correctly detect frozen frames and those revealing the presence of nearby MEs, independently of ME type and facial region. This property can contribute to speeding up and simplifying the ME spotting process, especially during long video acquisitions.
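A crude stand-in for the frozen-frame idea is to flag frames whose difference energy relative to the previous frame is near zero; the paper uses perception-based motion-energy features inspired by the Adelson–Bergen model rather than this simple frame difference, so the sketch below only conveys the intuition:

```python
import numpy as np

def frozen_frames(frames, threshold=1.0):
    """Flag frames whose mean absolute difference from the previous frame
    falls below a threshold (a crude stand-in for low motion energy)."""
    flags = [False]  # the first frame has no predecessor
    for prev, cur in zip(frames, frames[1:]):
        energy = np.mean(np.abs(cur.astype(float) - prev.astype(float)))
        flags.append(bool(energy < threshold))
    return flags
```

Runs of flagged frames would then be candidate warnings that a micro-expression lies nearby in the sequence.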

Open Access Article
Skin Lesion Segmentation Using Deep Learning with Auxiliary Task
J. Imaging 2021, 7(4), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040067 - 02 Apr 2021
Abstract
Skin lesion segmentation is a primary step in skin lesion analysis that can benefit the subsequent classification task. It is challenging because the boundaries of pigment regions may be fuzzy and the entire lesion may share a similar color. Prevalent deep learning methods for skin lesion segmentation make predictions by ensembling different convolutional neural networks (CNNs), aggregating multi-scale information, or using a multi-task learning framework; the main purpose of these designs is to exploit as much information as possible in order to make robust predictions. A multi-task learning framework has been proven beneficial for the skin lesion segmentation task, usually by incorporating the skin lesion classification task. However, multi-task learning requires extra labeling information, which may not be available for skin lesion images. In this paper, a novel CNN architecture using auxiliary information is proposed. Edge prediction, as an auxiliary task, is performed simultaneously with the segmentation task. A cross-connection layer module is proposed, in which the intermediate feature maps of each task are fed into the sub-blocks of the other task, implicitly guiding the neural network to focus on the boundary region of the segmentation task. In addition, a multi-scale feature aggregation module is proposed, which makes use of features at different scales and enhances the performance of the method. Experimental results show that the proposed method outperforms state-of-the-art methods, with a Jaccard Index (JA) of 79.46, Accuracy (ACC) of 94.32, and Sensitivity (SEN) of 88.76 using only one integrated model, which can be learned end to end.
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
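The Jaccard Index used to rank segmentation methods is the intersection-over-union of the predicted and ground-truth masks; a minimal sketch, reported in percent as in the abstract:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary masks, in percent."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.sum(pred & target)
    union = np.sum(pred | target)
    # Two empty masks agree perfectly by convention
    return 100.0 * inter / union if union else 100.0
```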

Open Access Review
Transfer Learning in Magnetic Resonance Brain Imaging: A Systematic Review
J. Imaging 2021, 7(4), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040066 - 01 Apr 2021
Abstract
(1) Background: Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization on the tasks of interest. In magnetic resonance imaging (MRI), transfer learning is important for developing strategies that address the variation in MR images acquired with different imaging protocols or scanners. It is also beneficial for reusing machine learning models that were trained to solve tasks different from, but related to, the task of interest. The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging. (2) Methods: We performed a systematic literature search for articles that applied transfer learning to MR brain imaging tasks. We screened 433 studies for relevance, and we categorized and extracted relevant information, including task type, application, availability of labels, and machine learning methods. Furthermore, we closely examined brain-MRI-specific transfer learning approaches and other methods that tackled issues relevant to medical imaging, including privacy, unseen target domains, and unlabeled data. (3) Results: We found 129 articles that applied transfer learning to MR brain imaging tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning techniques based on convolutional neural networks (CNNs). Only a few approaches utilized methodology that was clearly brain-MRI-specific or considered privacy issues, unseen target domains, or unlabeled data. We propose a new categorization to group specific, widely used approaches such as pretraining and fine-tuning CNNs. (4) Discussion: There is increasing interest in transfer learning for brain MRI. Well-known public datasets have clearly contributed to the popularity of Alzheimer’s diagnostics/prognostics and tumor segmentation as applications. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, nor did they compare their approach with other transfer learning approaches.
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)

Open Access Article
A Comparative Analysis for 2D Object Recognition: A Case Study with Tactode Puzzle-Like Tiles
J. Imaging 2021, 7(4), 65; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040065 - 01 Apr 2021
Abstract
Object recognition refers to the ability of a system to identify objects, humans or animals in images. Within this domain, this work presents a comparative analysis of different classification methods aimed at Tactode tile recognition. The covered methods include: (i) machine learning with HOG and SVM; (ii) deep learning with CNNs such as VGG16, VGG19, ResNet152, MobileNetV2, SSD and YOLOv4; (iii) matching of handcrafted features with SIFT, SURF, BRISK and ORB; and (iv) template matching. A dataset was created to train the learning-based methods (i and ii), while a template dataset was used for the other methods (iii and iv). To evaluate the performance of the recognition methods, two test datasets were built: tactode_small and tactode_big, which consisted of 288 and 12,000 images, holding 2784 and 96,000 regions of interest for classification, respectively. SSD and YOLOv4 were the worst methods in their domain, whereas ResNet152 and MobileNetV2 proved to be strong recognition methods. SURF, ORB and BRISK demonstrated great recognition performance, while SIFT was the worst of this type of method. The methods based on template matching attained reasonable recognition results, falling behind most other methods. The top three methods of this study were: VGG16, with an accuracy of 99.96% and 99.95% for tactode_small and tactode_big, respectively; VGG19, with an accuracy of 99.96% and 99.68% for the same datasets; and HOG with SVM, which reached an accuracy of 99.93% for tactode_small and 99.86% for tactode_big while presenting average execution times of 0.323 s and 0.232 s on the respective datasets, making it the fastest method overall. This work demonstrated that VGG16 was the best choice for this case study, since it minimised the misclassifications on both test datasets.
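The template matching baseline (iv) can be sketched as a brute-force zero-mean normalized cross-correlation search over the image; the exact matching variant and score used in the paper may differ, so this is illustrative only:

```python
import numpy as np

def match_template(image, template):
    """Brute-force zero-mean normalized cross-correlation.

    Slides the template over every position in a grayscale image and
    returns the top-left corner of the best match and its score in [-1, 1].
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            score = (wz * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

A perfect match scores 1.0; production implementations compute the same quantity with FFTs rather than this quadratic loop.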

Open Access Article
Investigating the Potential of Network Optimization for a Constrained Object Detection Problem
J. Imaging 2021, 7(4), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040064 - 01 Apr 2021
Abstract
Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computation. However, many operational use cases consist of more constrained situations: a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, and so on. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice: firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much the computational complexity of a standard object detection network can be reduced for such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are: swapping all the convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset. When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
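The saving from the first optimization step, swapping standard convolutions for depth-wise separable ones, follows directly from counting weights (biases ignored). The channel sizes below are illustrative, not the actual YoloV2 layer dimensions:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution: one k x k x c_in filter
    per output channel."""
    return k * k * c_in * c_out

def dw_separable_params(c_in, c_out, k):
    """Weights in a depth-wise separable replacement: one k x k filter
    per input channel, followed by a 1 x 1 point-wise convolution."""
    return k * k * c_in + c_in * c_out

print(conv_params(256, 256, 3))          # 589824
print(dw_separable_params(256, 256, 3))  # 67840
```

For a 3 x 3 layer with 256 input and output channels this is roughly an 8.7x reduction in weights, before pruning and quantization shrink the model further.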

Open Access Article
A Computational Study on Temperature Variations in MRgFUS Treatments Using PRF Thermometry Techniques and Optical Probes
J. Imaging 2021, 7(4), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040063 - 25 Mar 2021
Abstract
Structural and metabolic imaging are fundamental for diagnosis, treatment and follow-up in oncology. Beyond the well-established diagnostic imaging applications, ultrasounds are currently emerging in clinical practice as a noninvasive technology for therapy. Indeed, sound waves can be used to increase the temperature inside target solid tumors, leading to apoptosis or necrosis of neoplastic tissues. Magnetic resonance-guided focused ultrasound surgery (MRgFUS) represents a valid application of this ultrasound property, used mainly in oncology and neurology. In this paper, patient safety during MRgFUS treatments was investigated through a series of experiments in a tissue-mimicking phantom and in ex vivo skin samples, to promptly identify unwanted temperature rises. The acquired MR images, used to evaluate the temperature in the treated areas, were analyzed to compare classical proton resonance frequency (PRF) shift techniques and referenceless thermometry methods in accurately assessing the temperature variations. We exploited radial basis function (RBF) neural networks for referenceless thermometry and compared the results against measurements from a set of interferometric optical fibers that quantify temperature variations directly in the sonication areas. The temperature increases during the treatment were not accurately detected by MRI-based referenceless thermometry methods, and more sensitive measurement systems, such as optical fibers, would be required. In-depth studies of these aspects are needed to monitor temperature and improve safety during MRgFUS treatments.
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
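Classical PRF-shift thermometry maps a phase difference between two gradient-echo acquisitions to a temperature change. A sketch of the conversion, assuming a PRF thermal coefficient of about -0.01 ppm/°C; the field strength and echo time below are illustrative, not the study's protocol:

```python
import math

GAMMA = 42.576e6   # gyromagnetic ratio of 1H, Hz/T
ALPHA = -0.01e-6   # approximate PRF thermal coefficient, per deg C

def prf_delta_t(delta_phi, b0=1.5, te=0.02):
    """Temperature change (deg C) from the phase difference (rad) between
    a reference and a treatment MR image: classical PRF-shift thermometry.

    delta_T = delta_phi / (2 * pi * gamma * alpha * B0 * TE)
    """
    return delta_phi / (2 * math.pi * GAMMA * ALPHA * b0 * te)
```

With these values a phase change of roughly -0.08 rad corresponds to about +1 °C, which shows why small phase errors (e.g. from motion or field drift) translate into the temperature inaccuracies the study reports.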

Open Access Article
A GAN-Based Self-Training Framework for Unsupervised Domain Adaptive Person Re-Identification
J. Imaging 2021, 7(4), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040062 - 25 Mar 2021
Abstract
As a crucial task in surveillance and security, person re-identification (re-ID) aims to identify targeted pedestrians across multiple images captured by non-overlapping cameras. However, existing person re-ID solutions face two main challenges: the lack of pedestrian identification labels in the captured images, and the domain shift between different domains. A generative adversarial network (GAN)-based self-training framework with progressive augmentation (SPA) is proposed to obtain robust features of the unlabeled data from the target domain, guided by prior knowledge of the labeled data from the source domain. Specifically, the proposed framework consists of two stages: the style transfer stage (STrans) and the self-training stage (STrain). First, the target data is complemented by a camera style transfer algorithm in the STrans stage, in which CycleGAN and a Siamese network are integrated to preserve the unsupervised self-similarity (the similarity of the same image before and after transformation) and domain dissimilarity (the dissimilarity between a transferred source image and the target image). Second, clustering and classification are alternately applied in the STrain stage to progressively enhance model performance, obtaining both global and local features of the target-domain images. Compared with the state-of-the-art methods, the proposed method achieves competitive accuracy on two existing datasets.

Open Access Article
Time- and Resource-Efficient Time-to-Collision Forecasting for Indoor Pedestrian Obstacles Avoidance
J. Imaging 2021, 7(4), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040061 - 25 Mar 2021
Abstract
As difficult vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, made of a convolutional neural network, that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network’s ability to adapt to new sceneries with different types of obstacles through supervised learning.
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
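A constant-closing-speed baseline makes the Time-to-Collision quantity concrete; the paper instead predicts it with a small fully connected network over stacked obstacle data, so this is only a reference formula, not the proposed method:

```python
def time_to_collision(d_prev, d_cur, dt):
    """Naive Time-to-Collision (seconds) from two successive obstacle
    distances (meters), assuming constant closing speed between frames."""
    closing_speed = (d_prev - d_cur) / dt  # m/s toward the pedestrian
    if closing_speed <= 0:
        return float("inf")  # obstacle is not approaching
    return d_cur / closing_speed
```

For example, an obstacle moving from 5.0 m to 4.5 m away over 0.1 s closes at 5 m/s, giving 0.9 s to react; a learned predictor can improve on this by absorbing noise in the distance estimates across many stacked frames.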
