J. Imaging, Volume 8, Issue 5 (May 2022) – 32 articles

Cover Story: Tumor segmentation requires a highly trained physician. To reduce human intervention, we propose to replace the full-body 3D positron emission tomography (PET) image with two 2D projections obtained through Maximum Intensity Projections (MIPs). The two projections are then input to two 2D convolutional neural networks (CNNs) trained to classify lung vs. esophageal cancer. A weighted class activation map (CAM) is obtained for each projection, and the intersection of the two 2D orthogonal CAMs serves to detect the 3D region around the tumor. To refine the segmentation, we add a geometric loss based on prior knowledge that penalizes the distance between the CAMs and a seed point provided by the user. Finally, the 3D segmentation is fed to a 3D CNN to predict the patient outcome. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
11 pages, 1212 KiB  
Article
Deep Neural Network for Cardiac Magnetic Resonance Image Segmentation
by David Chen, Huzefa Bhopalwala, Nakeya Dewaswala, Shivaram P. Arunachalam, Moein Enayati, Nasibeh Zanjirani Farahani, Kalyan Pasupathy, Sravani Lokineni, J. Martijn Bos, Peter A. Noseworthy, Reza Arsanjani, Bradley J. Erickson, Jeffrey B. Geske, Michael J. Ackerman, Philip A. Araoz and Adelaide M. Arruda-Olson
J. Imaging 2022, 8(5), 149; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050149 - 23 May 2022
Cited by 6 | Viewed by 3543
Abstract
The analysis and interpretation of cardiac magnetic resonance (CMR) images are often time-consuming. The automated segmentation of cardiac structures can reduce the time required for image analysis. Spatial similarities between different CMR image types were leveraged to jointly segment multiple sequences using a segmentation model termed a multi-image type UNet (MI-UNet). This model was developed from 72 exams (46% female, mean age 63 ± 11 years) performed on patients with hypertrophic cardiomyopathy. The MI-UNet for steady-state free precession (SSFP) images achieved a superior Dice similarity coefficient (DSC) of 0.92 ± 0.06 compared to 0.87 ± 0.08 for a single-image type UNet (p < 0.001). The MI-UNet for late gadolinium enhancement (LGE) images also had a superior DSC of 0.86 ± 0.11 compared to 0.78 ± 0.11 for a single-image type UNet (p = 0.001). The difference across image types was most evident for the left ventricular myocardium in SSFP images and for both the left ventricular cavity and the left ventricular myocardium in LGE images. For the right ventricle, there were no differences in DSC when comparing the MI-UNet with single-image type UNets. The joint segmentation of multiple image types increases segmentation accuracy for CMR images of the left ventricle compared to single-image models. In clinical practice, the MI-UNet model may expedite the analysis and interpretation of CMR images of multiple types. Full article
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)
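As a quick illustration of the headline metric above, here is a minimal NumPy sketch of the Dice similarity coefficient (DSC) used to score the segmentations; the function and array names are illustrative, not from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```

A DSC of 0.92 therefore means the overlap between the automated and manual masks is 92% of their average size.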

24 pages, 7195 KiB  
Article
Three-Dimensional Finger Vein Recognition: A Novel Mirror-Based Imaging Device
by Christof Kauba, Martin Drahanský, Marie Nováková, Andreas Uhl and Štěpán Rydlo
J. Imaging 2022, 8(5), 148; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050148 - 23 May 2022
Cited by 5 | Viewed by 4247
Abstract
Finger vein recognition has evolved into a major biometric trait in recent years. Despite various improvements in recognition accuracy and usability, finger vein recognition is still far from perfect, as it suffers from low-contrast images and other imaging artefacts. Three-dimensional or multi-perspective finger vein recognition technology provides a way to tackle some of the current problems, especially finger misplacement and rotations. In this work, we present a novel multi-perspective finger vein capturing device that is based on mirrors, in contrast to most existing devices, which are usually based on multiple cameras. This new device uses only a single camera, a single illumination module and several mirrors to capture the finger at different rotational angles. To motivate the need for this new device, we first summarise the state of the art in multi-perspective finger vein recognition and identify the potential problems and shortcomings of the current devices. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)

15 pages, 4323 KiB  
Article
Scanning X-ray Fluorescence Data Analysis for the Identification of Byzantine Icons’ Materials, Techniques, and State of Preservation: A Case Study
by Theofanis Gerodimos, Anastasios Asvestas, Georgios P. Mastrotheodoros, Giannis Chantas, Ioannis Liougos, Aristidis Likas and Dimitrios F. Anagnostopoulos
J. Imaging 2022, 8(5), 147; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050147 - 23 May 2022
Cited by 5 | Viewed by 2898
Abstract
X-ray fluorescence (XRF) spectrometry has proven to be a core, non-destructive, analytical technique in cultural heritage studies, mainly because of its non-invasive character and its ability to rapidly reveal the elemental composition of the analyzed artifacts. Being able to penetrate deeper into matter than visible light, X-rays allow further analysis that may eventually lead to the extraction of information pertaining to the substrate(s) of an artifact. The recently developed scanning macroscopic X-ray fluorescence method (MA-XRF) allows for the extraction of elemental distribution images. The present work aimed at comparing two different methods for interpreting the large number of XRF spectra collected in the framework of MA-XRF analysis: a purely spectroscopic approach and an exploratory data analysis approach. The potential of the applied methods is showcased on a notable 18th-century Greek religious panel painting. The spectroscopic approach analyses each measured spectrum separately and leads to the construction of single-element spatial distribution images (element maps). The statistical data analysis approach groups all spectra into distinct clusters with common features, after which dimensionality reduction algorithms reduce the thousands of channels of the XRF spectra to an easily perceived set of two-dimensional images. The two analytical approaches allow the extraction of detailed information about the pigments used and the paint layer stratigraphy (i.e., painting technique), as well as restoration interventions and the state of preservation. Full article
(This article belongs to the Special Issue Spectral Imaging for Cultural Heritage)
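For readers unfamiliar with the exploratory route described above, the following hedged scikit-learn sketch shows its two generic building blocks, clustering the per-pixel spectra and reducing thousands of channels to two component images; the file name, scan size, and channel count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

h, w, n_channels = 200, 300, 4096                # hypothetical scan grid and detector channels
spectra = np.load("xrf_scan.npy").reshape(h * w, n_channels)

# Group scan points with similar elemental composition into clusters
cluster_map = KMeans(n_clusters=8, random_state=0).fit_predict(spectra).reshape(h, w)

# Compress the spectral channels into two easily perceived component images
pc_images = PCA(n_components=2).fit_transform(spectra).reshape(h, w, 2)
```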

11 pages, 5161 KiB  
Article
Dental MRI of Oral Soft-Tissue Tumors—Optimized Use of Black Bone MRI Sequences and a 15-Channel Mandibular Coil
by Adib Al-Haj Husain, Esra Sekerci, Daphne Schönegg, Fabienne A. Bosshard, Bernd Stadlinger, Sebastian Winklhofer, Marco Piccirelli and Silvio Valdec
J. Imaging 2022, 8(5), 146; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050146 - 22 May 2022
Cited by 7 | Viewed by 3338
Abstract
Soft-tissue lesions in the oral cavity, one of the most common sites for tumors and tumor-like lesions, can be challenging to diagnose and treat due to the wide spectrum from benign indolent to invasive malignant lesions. We report an abnormally large, rapidly growing hyperplastic lesion originating from the buccal mucosa in a 28-year-old male patient. Clinical examination revealed a well-circumscribed, smooth-surfaced, pinkish nodular lesion measuring 2.3 × 2 cm, which suggested the differential diagnosis of irritation fibroma, pyogenic granuloma, oral lipoma, and other benign or malignant neoplasms such as hemangioma, non-Hodgkin’s lymphoma, or metastases to the oral cavity. Dental MRI using a 15-channel mandibular coil was performed to improve perioperative radiological and surgical management, avoiding adverse intraoperative events and misdiagnosis of vascular malformations, especially hemangiomas. Black bone MRI protocols such as STIR (short-tau inversion recovery) and DESS (double-echo steady-state) were used for high-resolution radiation-free imaging. Radiologic findings supported the suspected diagnosis of an irritation fibroma and ruled out any further head and neck lesions; therefore, complete surgical resection was performed. Histology confirmed the tentative diagnosis. This article evaluates the use of this novel technique for MR diagnosis in the perioperative management of soft-tissue tumors in oral and maxillofacial surgery. Full article
(This article belongs to the Special Issue New Frontiers of Advanced Imaging in Dentistry)

20 pages, 721 KiB  
Review
What Is Significant in Modern Augmented Reality: A Systematic Analysis of Existing Reviews
by Athanasios Nikolaidis
J. Imaging 2022, 8(5), 145; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050145 - 21 May 2022
Cited by 5 | Viewed by 2303
Abstract
Augmented reality (AR) is a field of technology that has evolved drastically during the last decades, due to its vast range of applications in everyday life. The aim of this paper is to provide researchers with an overview of what has been surveyed since 2010 in terms of AR application areas and technical aspects, and to discuss the extent to which both have been covered. We also examine whether useful evidence can be extracted about aspects that have not been covered adequately, and whether it is possible to define common taxonomy criteria for performing AR reviews in the future. To this end, a search with inclusion and exclusion criteria has been performed in the Scopus database, producing a representative set of 47 reviews covering the years from 2010 onwards. A proper taxonomy of the results is introduced, and the findings reveal, among others, the lack of AR application reviews covering all suggested criteria. Full article
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)

15 pages, 7514 KiB  
Article
Generation of Ince–Gaussian Beams Using Azocarbazole Polymer CGH
by Sumit Kumar Singh, Honoka Haginaka, Boaz Jessie Jackin, Kenji Kinashi, Naoto Tsutsumi and Wataru Sakai
J. Imaging 2022, 8(5), 144; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050144 - 21 May 2022
Cited by 8 | Viewed by 2965
Abstract
Ince–Gaussian beams, defined as a solution to the wave equation in elliptical coordinates, have shown great advantages in applications such as optical communication, optical trapping and optical computation. However, to enable these applications, a compact and scalable method for generating these beams is required. Here, we present a simple method that satisfies this requirement and is capable of generating arbitrary Ince–Gaussian beams and their superposed states through a computer-generated hologram of size 1 mm², fabricated on an azocarbazole polymer film. Other structural beams that can be derived from the Ince–Gaussian beam were also successfully generated by changing the elliptical parameters of the Ince–Gaussian beam. The orthogonality relations between different Ince–Gaussian modes were investigated in order to verify applicability in an optical communication regime. The complete Python source code for computing the Ince–Gaussian beams and their holograms is also provided. Full article
(This article belongs to the Special Issue Digital Holography: Development and Application)
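The hologram computation itself follows the standard interference recipe for an amplitude CGH. The sketch below is a generic illustration, with a Gaussian vortex field standing in for a true Ince–Gaussian mode; it is not the authors' code, which accompanies the paper.

```python
import numpy as np

wavelength = 532e-9                       # assumed laser wavelength (m)
k = 2 * np.pi / wavelength
N, pitch = 1024, 1e-6                     # 1024 px at 1 um pitch ~ a 1 mm^2 hologram
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

# Placeholder target field (Gaussian envelope with a vortex phase);
# an Ince-Gaussian mode would be substituted here.
w0 = 120e-6
target = np.exp(-(X**2 + Y**2) / w0**2) * np.exp(1j * np.arctan2(Y, X))

reference = np.exp(1j * k * np.sin(np.deg2rad(1.0)) * X)   # off-axis plane wave
hologram = np.abs(target + reference) ** 2                  # recorded interference
hologram /= hologram.max()                                  # normalized transmittance
```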

22 pages, 5554 KiB  
Article
Digital Hebrew Paleography: Script Types and Modes
by Ahmad Droby, Irina Rabaev, Daria Vasyutinsky Shapira, Berat Kurar Barakat and Jihad El-Sana
J. Imaging 2022, 8(5), 143; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050143 - 21 May 2022
Cited by 3 | Viewed by 3358
Abstract
Paleography is the study of ancient and medieval handwriting. It is essential for understanding, authenticating, and dating historical texts. Across many archives and libraries, many handwritten manuscripts are yet to be classified. Human experts can process only a limited number of manuscripts; therefore, there is a need for an automatic tool for script type classification. In this study, we utilize a deep-learning methodology to classify medieval Hebrew manuscripts into 14 classes based on their script style and mode. Hebrew paleography recognizes six regional styles and three graphical modes of scripts. We experiment with several input image representations and network architectures to determine the appropriate ones and explore several approaches for script classification. We obtained the highest accuracy using a hierarchical classification approach: at the first level, the regional style of the script is classified; the patch is then passed to the corresponding model at the second level to determine the graphical mode. In addition, we explore the use of soft labels to define a value we call the squareness value, which indicates the squareness/cursiveness of the script. We show how the graphical mode labels can be redefined using the squareness value. This redefinition increases the classification accuracy significantly. Finally, we show that the automatic classification is on par with a human expert paleographer. Full article
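A minimal sketch of the hierarchical scheme described above, assuming two trained classifiers with a scikit-learn-style predict interface (all names are hypothetical):

```python
def classify_patch(patch, style_model, mode_models):
    """Two-level classification: regional style first, then graphical mode.

    style_model -- classifier over the six regional script styles
    mode_models -- dict mapping each predicted style to its mode classifier
    """
    style = style_model.predict([patch])[0]        # first level: regional style
    mode = mode_models[style].predict([patch])[0]  # second level: graphical mode
    return style, mode
```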

17 pages, 6390 KiB  
Article
upU-Net Approaches for Background Emission Removal in Fluorescence Microscopy
by Alessandro Benfenati
J. Imaging 2022, 8(5), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050142 - 20 May 2022
Cited by 3 | Viewed by 2572
Abstract
The physical process underlying microscopy imaging suffers from several issues: some of them include the blurring effect due to the Point Spread Function, the presence of Gaussian or Poisson noise, or even a mixture of these two types of perturbation. In addition, auto-fluorescence introduces further artifacts into the acquired image, and such fluorescence may be an important obstacle to correctly recognizing objects and organisms in the image. For example, particle tracking may suffer from the presence of this kind of perturbation. The objective of this work is to employ Deep Learning techniques, in the form of U-Net-like architectures, for background emission removal. Such fluorescence is modeled by Perlin noise, which proves to be a suitable candidate for simulating this phenomenon. The proposed architecture succeeds in removing the fluorescence, and at the same time, it acts as a denoiser for both Gaussian and Poisson noise. The performance of this approach is further assessed on actual microscopy images and by employing the restored images for particle recognition. Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular System)
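Background emission of this kind can be simulated cheaply; the sketch below uses smooth upsampling of a coarse random grid as a crude stand-in for the Perlin noise used in the paper, then adds Poisson noise to mimic the acquisition.

```python
import numpy as np
from scipy.ndimage import zoom

def smooth_background(shape=(256, 256), coarse=8, seed=0):
    """Low-frequency random background (a simple stand-in for Perlin noise)."""
    grid = np.random.default_rng(seed).random((coarse, coarse))
    return zoom(grid, (shape[0] / coarse, shape[1] / coarse), order=3)

clean = np.zeros((256, 256))                             # placeholder fluorescence-free image
emission = clean + smooth_background()
noisy = np.random.poisson(50 * emission).astype(float)   # Poisson-corrupted input
```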

22 pages, 28214 KiB  
Review
Image Augmentation Techniques for Mammogram Analysis
by Parita Oza, Paawan Sharma, Samir Patel, Festus Adedoyin and Alessandro Bruno
J. Imaging 2022, 8(5), 141; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050141 - 20 May 2022
Cited by 41 | Viewed by 5815
Abstract
Research in the medical imaging field using deep learning approaches has become progressively more prevalent. Scientific findings reveal that the performance of supervised deep learning methods heavily depends on the size of the training set, which expert radiologists must manually annotate. The latter is quite a tiring and time-consuming task. Therefore, most of the freely accessible biomedical image datasets are small-sized. Furthermore, it is challenging to assemble large medical image datasets due to privacy and legal issues. Consequently, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular methods to mitigate this issue goes under the name of data augmentation. This technique helps increase training set size by utilizing various transformations and has been shown to improve model performance when tested on new data. This article surveys the different data augmentation techniques employed on mammogram images, aiming to provide insights into both basic and deep learning-based augmentation techniques. Full article
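As a concrete taste of the basic transformations the survey covers, here is a hedged NumPy/SciPy sketch; the parameter ranges are illustrative, not taken from the article.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(mammogram, seed=None):
    """Yield simple augmented variants of a 2D mammogram array."""
    rng = np.random.default_rng(seed)
    yield np.fliplr(mammogram)                              # horizontal flip
    yield rotate(mammogram, angle=rng.uniform(-15, 15),
                 reshape=False, mode="nearest")             # small random rotation
    norm = mammogram / (mammogram.max() or 1.0)
    yield norm ** rng.uniform(0.8, 1.2)                     # gamma (intensity) shift
```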

12 pages, 2015 KiB  
Article
Comparison of Ultrasound Image Classifier Deep Learning Algorithms for Shrapnel Detection
by Emily N. Boice, Sofia I. Hernandez-Torres and Eric J. Snider
J. Imaging 2022, 8(5), 140; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050140 - 20 May 2022
Cited by 8 | Viewed by 2299
Abstract
Ultrasound imaging is essential in emergency medicine and combat casualty care, oftentimes used as a critical triage tool. However, identifying injuries, such as shrapnel embedded in tissue or a pneumothorax, can be challenging without extensive ultrasonography training, which may not be available in prolonged field care or emergency medicine scenarios. Artificial intelligence can simplify this by automating image interpretation, but only if it can be deployed for use in real time. We previously developed a deep learning neural network model specifically designed to identify shrapnel in ultrasound images, termed ShrapML. Here, we expand on that work to further optimize the model and compare its performance to that of conventional models trained on the ImageNet database, such as ResNet50. Through Bayesian optimization, the model’s parameters were further refined, resulting in an F1 score of 0.98. We compared the proposed model to four conventional models: DarkNet-19, GoogleNet, MobileNetv2, and SqueezeNet, which were down-selected based on speed and testing accuracy. Although MobileNetv2 achieved a higher accuracy than ShrapML, there was a tradeoff between accuracy and speed, with ShrapML being 10× faster than MobileNetv2. In conclusion, real-time deployment of algorithms such as ShrapML can reduce the cognitive load for medical providers in high-stress emergency or military medicine scenarios. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)

19 pages, 7727 KiB  
Article
Intraretinal Layer Segmentation Using Cascaded Compressed U-Nets
by Sunil Kumar Yadav, Rahele Kafieh, Hanna Gwendolyn Zimmermann, Josef Kauer-Bonin, Kouros Nouri-Mahdavi, Vahid Mohammadzadeh, Lynn Shi, Ella Maria Kadas, Friedemann Paul, Seyedamirhosein Motamedi and Alexander Ulrich Brandt
J. Imaging 2022, 8(5), 139; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050139 - 17 May 2022
Cited by 6 | Viewed by 2794
Abstract
Reliable biomarkers quantifying neurodegeneration and neuroinflammation in central nervous system disorders such as Multiple Sclerosis, Alzheimer’s dementia or Parkinson’s disease are an unmet clinical need. Intraretinal layer thicknesses on macular optical coherence tomography (OCT) images are promising noninvasive biomarkers querying neuroretinal structures with near cellular resolution. However, changes are typically subtle, while tissue gradients can be weak, making intraretinal segmentation a challenging task. A robust and efficient method that requires no or minimal manual correction is an unmet need to foster reliable and reproducible research as well as clinical application. Here, we propose and validate a cascaded two-stage network for intraretinal layer segmentation, with both networks being compressed versions of U-Net (CCU-INSEG). The first network is responsible for retinal tissue segmentation from OCT B-scans. The second network segments eight intraretinal layers with high fidelity. At the post-processing stage, we introduce Laplacian-based outlier detection with layer surface hole filling by adaptive non-linear interpolation. Additionally, we propose a weighted version of focal loss to minimize the foreground–background pixel imbalance in the training data. We train our method using 17,458 B-scans from patients with autoimmune optic neuropathies, i.e., multiple sclerosis, and healthy controls. Voxel-wise comparison against manual segmentation produces a mean absolute error of 2.3 μm, outperforming current state-of-the-art methods on the same data set. Voxel-wise comparison against external glaucoma data leads to a mean absolute error of 2.6 μm when using the same gold standard segmentation approach, and 3.7 μm mean absolute error in an externally segmented data set. In scans from patients with severe optic atrophy, 3.5% of B-scan segmentation results were rejected by an experienced grader, whereas this was the case in 41.4% of B-scans segmented with a graph-based reference method. The validation results suggest that the proposed method can robustly segment macular scans from eyes with even severe neuroretinal changes. Full article
(This article belongs to the Special Issue Frontiers in Retinal Image Processing)
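The weighted focal loss mentioned above follows the usual form FL(p_t) = -w (1 - p_t)^γ log(p_t); below is a minimal binary sketch, while the paper's multi-class weighting over eight layers will differ in detail.

```python
import numpy as np

def weighted_focal_loss(probs, targets, weights, gamma=2.0, eps=1e-7):
    """Per-pixel weighted focal loss for a binary foreground/background map.

    probs   -- predicted foreground probabilities, shape (H, W)
    targets -- binary ground truth, shape (H, W)
    weights -- per-pixel weights counteracting class imbalance, shape (H, W)
    """
    probs = np.clip(probs, eps, 1 - eps)
    p_t = np.where(targets == 1, probs, 1 - probs)   # prob of the true class
    return float((-weights * (1 - p_t) ** gamma * np.log(p_t)).mean())
```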

13 pages, 7980 KiB  
Article
A Generic Framework for Depth Reconstruction Enhancement
by Hendrik Sommerhoff and Andreas Kolb
J. Imaging 2022, 8(5), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050138 - 16 May 2022
Viewed by 1995
Abstract
We propose a generic depth-refinement scheme based on GeoNet, a recent deep-learning approach for predicting depth and normals from a single color image, and extend it to be applied to any depth reconstruction task such as super resolution, denoising and deblurring, as long as the task includes a depth output. Our approach utilizes a tight coupling of the inherent geometric relationship between depth and normal maps to guide a neural network. In contrast to GeoNet, we do not utilize the original input information to the backbone reconstruction task, which leads to a generic application of our network structure. Our approach first learns a high-quality normal map from the depth image generated by the backbone method and then uses this normal map to refine the initial depth image jointly with the learned normal map. This is motivated by the fact that it is hard for neural networks to learn direct mapping between depth and normal maps without explicit geometric constraints. We show the efficiency of our method on the exemplary inverse depth-image reconstruction tasks of denoising, super resolution and removal of motion blur. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
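The depth-to-normal coupling at the heart of the method can be illustrated with finite differences; the sketch below ignores full camera intrinsics, a simplification rather than the paper's exact formulation.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate a per-pixel normal map from a depth image via finite differences."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))   # ~(-dz/dx, -dz/dy, 1)
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```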

15 pages, 3357 KiB  
Article
Development of a Visualisation Approach for Analysing Incipient and Clinically Unrecorded Enamel Fissure Caries Using Laser-Induced Contrast Imaging, MicroRaman Spectroscopy and Biomimetic Composites: A Pilot Study
by Pavel Seredin, Dmitry Goloshchapov, Vladimir Kashkarov, Anna Emelyanova, Nikita Buylov, Yuri Ippolitov and Tatiana Prutskij
J. Imaging 2022, 8(5), 137; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050137 - 13 May 2022
Cited by 3 | Viewed by 1833
Abstract
This pilot study presents a practical approach to detecting and visualising the initial forms of caries that are not clinically registered. The use of a laser-induced contrast visualisation (LICV) technique was shown to provide detection of originating caries based on the separation of emissions from sound tissue, areas with destroyed tissue and regions of bacterial invasion. Adding microRaman spectroscopy to the measuring system enables reliable detection of the transformation of the organic–mineral component in the dental tissue and the spread of bacterial microflora in the affected region. Further laboratory and clinical studies of the combined use of LICV and microRaman spectroscopy would extend the data on applying this approach to the accurate determination of the boundaries of changed dental tissue resulting from initial caries. The resulting data have the potential to support an effective preventive diagnostic approach, and as a result, further personalised medical treatment can be specified. Full article
(This article belongs to the Special Issue New Frontiers of Advanced Imaging in Dentistry)

18 pages, 2161 KiB  
Article
Data Extraction of Circular-Shaped and Grid-like Chart Images
by Filip Bajić and Josip Job
J. Imaging 2022, 8(5), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050136 - 12 May 2022
Cited by 3 | Viewed by 2342
Abstract
Chart data extraction is a crucial research field in recovering information from chart images. With the recent rise in image processing and computer vision algorithms, researchers have presented various approaches to tackle this problem. Nevertheless, most of them use different datasets, often not publicly available to the research community. Therefore, the main focus of this research was to create a chart data extraction algorithm for circular-shaped and grid-like chart types, which will accelerate research in this field and allow uniform result comparison. A large-scale dataset is provided containing 120,000 chart images organized into 20 categories, with corresponding ground truth for each image. To the best of our knowledge, no other authors report chart data extraction for sunburst diagrams, heatmaps, or waffle charts. In this research, a new, fully automatic low-level algorithm is also presented that uses a raster image as input and generates an object-oriented structure of the chart from that image. The main novelty of the proposed approach is chart processing on binary images instead of the commonly used pixel counting techniques. The experiments were performed with a synthetic dataset and with real-world chart images. The obtained results demonstrate two things: first, a low-level bottom-up approach can be shared among different chart types; second, the proposed algorithm achieves superior results on a synthetic dataset. The achieved average data extraction accuracy on the synthetic dataset can be considered state-of-the-art within multiple error rate groups. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

21 pages, 5811 KiB  
Article
On Acquisition Parameters and Processing Techniques for Interparticle Contact Detection in Granular Packings Using Synchrotron Computed Tomography
by Fernando Alvarez-Borges, Sharif Ahmed and Robert C. Atwood
J. Imaging 2022, 8(5), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050135 - 12 May 2022
Viewed by 1958
Abstract
X-ray computed tomography (XCT) is regularly employed in geomechanics to non-destructively measure the solid and pore fractions of soil and rock from reconstructed 3D images. With the increasing availability of high-resolution XCT imaging systems, researchers now seek to measure microfabric parameters such as the number and area of interparticle contacts, which can then be used to inform soil behaviour modelling techniques. However, recent research has evidenced that conventional image processing methods consistently overestimate the number and area of interparticle contacts, mainly due to acquisition-driven image artefacts. The present study seeks to address this issue by systematically assessing the role of XCT acquisition parameters in the accurate detection of interparticle contacts. To this end, synchrotron XCT has been applied to a hexagonal close-packed arrangement of glass pellets with and without a prescribed separation between lattice layers. Different values for the number of projections, exposure time, and rotation range have been evaluated. Conventional global grey value thresholding and novel U-Net segmentation methods have been assessed, followed by local refinements at the presumptive contacts, as per recently proposed contact detection routines. The effect of the different acquisition set-ups and segmentation techniques on contact detection performance is presented and discussed, and optimised workflows are proposed. Full article
(This article belongs to the Special Issue Recent Advances in Image-Based Geotechnics)
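Once particles are labeled, the naive contact measure that the paper says tends to overestimate is a simple count of voxel faces shared by two different labels; a NumPy sketch:

```python
import numpy as np

def contact_voxel_faces(labels):
    """Count face-adjacent voxel pairs belonging to two different particles.

    labels -- 3D integer array of segmented particle labels (0 = pore space),
              e.g., the output of a marker-controlled watershed.
    Returns {(label_i, label_j): number of shared voxel faces}.
    """
    contacts = {}
    for axis in range(3):
        a = np.moveaxis(labels, axis, 0)[:-1]
        b = np.moveaxis(labels, axis, 0)[1:]
        touching = (a != b) & (a > 0) & (b > 0)
        for i, j in zip(a[touching].ravel(), b[touching].ravel()):
            key = (min(i, j), max(i, j))
            contacts[key] = contacts.get(key, 0) + 1
    return contacts
```

Local refinement at each presumptive contact, as proposed in the paper, would then prune faces introduced by acquisition-driven artefacts.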

11 pages, 30941 KiB  
Article
LightBot: A Multi-Light Position Robotic Acquisition System for Adaptive Capturing of Cultural Heritage Surfaces
by Ramamoorthy Luxman, Yuly Emilia Castro, Hermine Chatoux, Marvin Nurit, Amalia Siatou, Gaëtan Le Goïc, Laura Brambilla, Christian Degrigny, Franck Marzani and Alamin Mansouri
J. Imaging 2022, 8(5), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050134 - 12 May 2022
Cited by 8 | Viewed by 2009
Abstract
Multi-light acquisitions and modeling are well-studied techniques for characterizing surface geometry, widely used in the cultural heritage field. Current systems that are used to perform this kind of acquisition are mainly free-form or dome-based. Both of them have constraints in terms of reproducibility, limitations on the size of objects being acquired, speed, and portability. This paper presents a novel robotic arm-based system design, which we call LightBot, as well as its applications in reflectance transformation imaging (RTI) in particular. The proposed model alleviates some of the limitations observed in the case of free-form or dome-based systems. It allows the automation and reproducibility of one or a series of acquisitions adapting to a given surface in two-dimensional space. Full article
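Downstream of acquisition, RTI commonly fits a per-pixel reflectance model over the captured light directions. The following is a hedged sketch of the classic biquadratic polynomial texture map (PTM) fit, one standard way such multi-light stacks are modeled, not necessarily the processing used with LightBot.

```python
import numpy as np

def fit_ptm(intensities, light_dirs):
    """Least-squares fit of a biquadratic polynomial texture map (PTM).

    intensities -- (n_lights, H, W) image stack under known light positions
    light_dirs  -- (n_lights, 2) projected light directions (lu, lv)
    Returns per-pixel coefficients, shape (H, W, 6).
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n, H, W = intensities.shape
    coeffs, *_ = np.linalg.lstsq(A, intensities.reshape(n, -1), rcond=None)
    return coeffs.reshape(6, H, W).transpose(1, 2, 0)
```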

17 pages, 3478 KiB  
Article
Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images
by Massimo Salvi, Bruno De Santi, Bianca Pop, Martino Bosco, Valentina Giannini, Daniele Regge, Filippo Molinari and Kristen M. Meiburger
J. Imaging 2022, 8(5), 133; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050133 - 11 May 2022
Cited by 11 | Viewed by 2757
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, the use of automated algorithms for prostate segmentation allows us to bypass the huge workload of physicians. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images using an initial segmentation of prostate volumes using a custom-made 3D deep network (VNet-T2), followed by refinement using an Active Shape Model (ASM). While the deep network focuses on three-dimensional spatial coherence of the shape, the ASM relies on local image information and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. In the test set, the proposed method shows excellent segmentation performance, achieving a mean dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging. Full article
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
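The Hausdorff distance reported above measures the worst-case contour deviation between masks; a small SciPy sketch follows. The voxel spacing is a hypothetical parameter, and production code would typically restrict the computation to surface voxels for speed.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(pred_mask, truth_mask, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between two 3D binary masks, in mm."""
    p = np.argwhere(pred_mask) * np.asarray(spacing)
    t = np.argwhere(truth_mask) * np.asarray(spacing)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```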

23 pages, 5783 KiB  
Article
Are Social Networks Watermarking Us or Are We (Unawarely) Watermarking Ourself?
by Flavio Bertini, Rajesh Sharma and Danilo Montesi
J. Imaging 2022, 8(5), 132; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050132 - 10 May 2022
Cited by 4 | Viewed by 2164
Abstract
In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. Thus, keeping in mind this scenario, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we first investigate how thirteen of the most popular SNs treat uploaded pictures in order to identify a possible implementation of image watermarking techniques by the respective SNs. Second, we test the robustness of several image watermarking algorithms on these thirteen SNs. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique, which is usually used in digital forensics or image forgery detection activities, can be successfully used as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method is sufficiently robust, in spite of the fact that pictures are often downgraded during the process of uploading to the SNs. Moreover, in comparison to conventional watermarking methods, the proposed method can successfully pass through different SNs, solving related problems such as profile linking and fake profile detection. The results of our analysis on a real dataset of 8400 pictures show that the proposed method is more effective than other watermarking techniques and can help to address serious questions about privacy and security on SNs. Moreover, the proposed method paves the way for the definition of multi-factor online authentication mechanisms based on robust digital features. Full article
(This article belongs to the Special Issue Visualisation and Cybersecurity)
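The PRNU idea can be sketched in a few lines: a camera fingerprint is the average noise residual of many pictures from the same source, and authorship is tested by normalized correlation. The Gaussian denoiser below is a stand-in for the wavelet denoiser used in the PRNU literature, so this is a simplification rather than the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Noise residual = image minus a denoised version of itself."""
    return img - gaussian_filter(img, sigma=1.5)

def prnu_fingerprint(images):
    """Average residual over many pictures from the same source."""
    return np.mean([noise_residual(i) for i in images], axis=0)

def correlation(residual, fingerprint):
    """Normalized correlation used to attribute/verify a picture."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))
```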

19 pages, 727 KiB  
Article
BI-RADS BERT and Using Section Segmentation to Understand Radiology Reports
by Grey Kuling, Belinda Curpen and Anne L. Martel
J. Imaging 2022, 8(5), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050131 - 09 May 2022
Cited by 8 | Viewed by 2956
Abstract
Radiology reports are one of the main forms of communication between radiologists and other clinicians, and contain important information for patient care. In order to use this information for research and automated patient care programs, it is necessary to convert the raw text into structured data suitable for analysis. State-of-the-art natural language processing (NLP) domain-specific contextual word embeddings have been shown to achieve impressive accuracy for these tasks in medicine, but have yet to be utilized for section structure segmentation. In this work, we pre-trained a contextual embedding BERT model using breast radiology reports and developed a classifier that incorporated the embedding with auxiliary global textual features in order to perform section segmentation. This model achieved 98% accuracy in segregating free-text reports, sentence by sentence, into sections of information outlined in the Breast Imaging Reporting and Data System (BI-RADS) lexicon, which is a significant improvement over the classic BERT model without auxiliary information. We then evaluated whether using section segmentation improved the downstream extraction of clinically relevant information such as modality/procedure, previous cancer, menopausal status, purpose of exam, breast density, and breast MRI background parenchymal enhancement. Using the BERT model pre-trained on breast radiology reports, combined with section segmentation, resulted in an overall accuracy of 95.9% in the field extraction tasks. This is a 17% improvement, compared to an overall accuracy of 78.9% for field extraction with models using classic BERT embeddings and not using section segmentation. Our work shows the strength of using BERT in the analysis of radiology reports and the advantages of section segmentation by identifying the key features of patient factors recorded in breast radiology reports. Full article
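A hedged sketch of the core modeling idea, combining a contextual [CLS] sentence embedding with auxiliary global features before a section classifier; the checkpoint name, feature count, and section count below are placeholders, not the paper's configuration.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class SectionClassifier(nn.Module):
    """[CLS] embedding + auxiliary global text features -> section label."""

    def __init__(self, model_name="bert-base-uncased", n_aux=4, n_sections=6):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.bert = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.bert.config.hidden_size + n_aux, n_sections)

    def forward(self, sentences, aux_features):
        tokens = self.tokenizer(sentences, padding=True, truncation=True,
                                return_tensors="pt")
        cls = self.bert(**tokens).last_hidden_state[:, 0]   # [CLS] embeddings
        return self.head(torch.cat([cls, aux_features], dim=1))
```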

14 pages, 2619 KiB  
Article
Weakly Supervised Tumor Detection in PET Using Class Response for Treatment Outcome Prediction
by Amine Amyar, Romain Modzelewski, Pierre Vera, Vincent Morard and Su Ruan
J. Imaging 2022, 8(5), 130; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050130 - 09 May 2022
Cited by 8 | Viewed by 2200
Abstract
It is proven that radiomic characteristics extracted from the tumor region are predictive. The first step in radiomic analysis is the segmentation of the lesion. However, this task is time consuming and requires a highly trained physician. This process could be automated using computer-aided detection (CAD) tools. Current state-of-the-art methods are trained in a supervised learning setting, which requires a lot of data that are usually not available in the medical imaging field. The challenge is to train one model to segment different types of tumors with only a weak segmentation ground truth. In this work, we propose a prediction framework including a 3D tumor segmentation in positron emission tomography (PET) images, based on a weakly supervised deep learning method, and an outcome prediction based on a 3D-CNN classifier applied to the segmented tumor regions. The key step is to locate the tumor in 3D. We propose to (1) calculate two maximum intensity projection (MIP) images from 3D PET images in two directions, (2) classify the MIP images into different types of cancers, (3) generate the class activation maps through a multitask learning approach with weak prior knowledge, and (4) segment the 3D tumor region from the two 2D activation maps with a new loss function proposed for the multitask setting. The proposed approach achieves state-of-the-art prediction results with a small data set and a weak segmentation ground truth. Our model was tested and validated for treatment response and survival in lung and esophageal cancers on 195 patients, with an area under the receiver operating characteristic curve (AUC) of 67% and 59%, respectively, and Dice coefficients of 73% and 77% for tumor segmentation. Full article
(This article belongs to the Special Issue Radiomics and Texture Analysis in Medical Imaging)
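Steps (1) and (4) are easy to picture in NumPy; the sketch below uses random placeholder arrays purely to show the projection axes and the back-projected intersection.

```python
import numpy as np

volume = np.random.rand(128, 128, 96)          # placeholder 3D PET volume (X, Y, Z)

# Step (1): maximum intensity projections along two orthogonal directions
mip_a = volume.max(axis=0)                     # (Y, Z) projection
mip_b = volume.max(axis=1)                     # (X, Z) projection

# Step (4), schematically: intersect the two binarized 2D activation maps in 3D
cam_a = np.random.rand(128, 96) > 0.8          # placeholder CAM over (Y, Z)
cam_b = np.random.rand(128, 96) > 0.8          # placeholder CAM over (X, Z)
region3d = cam_a[None, :, :] & cam_b[:, None, :]   # candidate tumor region (X, Y, Z)
```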

14 pages, 2029 KiB  
Article
Image Classification in JPEG Compression Domain for Malaria Infection Detection
by Yuhang Dong and W. David Pan
J. Imaging 2022, 8(5), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050129 - 03 May 2022
Cited by 2 | Viewed by 1985
Abstract
Digital images are usually stored in compressed format. However, image classification typically takes decompressed images as inputs rather than compressed ones. Performing image classification directly in the compression domain therefore eliminates the need for decompression, increasing efficiency and decreasing costs. Despite this, there has been very sparse work on image classification in the compression domain. In this paper, we studied the feasibility of classifying images in their JPEG compression domain. We analyzed the underlying mechanisms of JPEG as an example and conducted classification on data from different stages of the compression. The images we used were malaria-infected red blood cells and normal cells. The training data include multiple combinations of DCT coefficients, DC values in both decimal and binary forms, the “scan” segment in both binary and decimal forms, and variable lengths of the entire bitstream. The results show that an LSTM can successfully classify the images in their compressed form, with accuracies around 80%. Using only coded DC values, we can achieve accuracies higher than 90%. This indicates that images from different classes can still be well separated in their JPEG-compressed format. Our simulations demonstrate that the proposed compression-domain processing method can reduce the input data and eliminate the image decompression step, thereby achieving significant savings in memory and computation time. Full article
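The "DC values" features can be approximated with a blockwise DCT, as sketched below; note that a real JPEG codec also level-shifts, quantizes, and entropy-codes these coefficients, so this is an illustration rather than a bit-exact reproduction.

```python
import numpy as np
from scipy.fft import dctn

def block_dc_values(img, block=8):
    """DC (top-left) DCT coefficient of each 8x8 block, as in JPEG."""
    h, w = (d - d % block for d in img.shape)   # crop to whole blocks
    dcs = [dctn(img[y:y + block, x:x + block], norm="ortho")[0, 0]
           for y in range(0, h, block) for x in range(0, w, block)]
    return np.array(dcs)   # one sequence per image, ready to feed an LSTM
```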

17 pages, 6177 KiB  
Article
Elimination of Defects in Mammograms Caused by a Malfunction of the Device Matrix
by Dmitrii Tumakov, Zufar Kayumov, Alisher Zhumaniezov, Dmitry Chikrin and Diaz Galimyanov
J. Imaging 2022, 8(5), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050128 - 02 May 2022
Cited by 5 | Viewed by 2978
Abstract
Today, the processing and analysis of mammograms is quite an important field of medical image processing. Small defects in images can lead to false conclusions. This is especially true when the distortion occurs due to minor malfunctions in the equipment. In the present work, an algorithm is proposed for eliminating a defect that manifests as a change in intensity on a mammogram and a deterioration in the contrast of individual areas. The algorithm consists of three stages. The first is the defect identification stage. The second involves improving and equalizing the contrast of different parts of the image outside the defect. The third involves restoring the defect area via a combination of interpolation and an artificial neural network. The mammogram obtained as a result of applying the algorithm shows significantly better image quality and does not contain distortions caused by changes in the brightness of the pixels. The resulting images are evaluated using the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and Naturalness Image Quality Evaluator (NIQE) metrics. In total, 98 radiomics features are extracted from the original and processed images, and conclusions are drawn about the minimal changes in features between the original image and the image obtained by the proposed algorithm. Full article
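The interpolation half of the third stage can be sketched with off-the-shelf tools; the crude row-based defect detector below is a hypothetical stand-in for the paper's first stage, not its actual method.

```python
import numpy as np
from skimage.restoration import inpaint_biharmonic

def detect_defect_rows(img, z_thresh=3.0):
    """Flag rows whose mean intensity deviates strongly (crude stage one)."""
    row_means = img.mean(axis=1)
    z = (row_means - row_means.mean()) / (row_means.std() + 1e-9)
    mask = np.zeros(img.shape, dtype=bool)
    mask[np.abs(z) > z_thresh, :] = True
    return mask

def restore_defect(img, defect_mask):
    """Fill the defect region by smooth biharmonic interpolation (stage three,
    minus the neural-network component described in the paper)."""
    return inpaint_biharmonic(img, defect_mask)
```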

21 pages, 2385 KiB  
Article
Airborne Hyperspectral Imagery for Band Selection Using Moth–Flame Metaheuristic Optimization
by Raju Anand, Sathishkumar Samiaappan, Shanmugham Veni, Ethan Worch and Meilun Zhou
J. Imaging 2022, 8(5), 126; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050126 - 27 Apr 2022
Cited by 7 | Viewed by 2155
Abstract
In this research, we study a new metaheuristic algorithm called Moth–Flame Optimization (MFO) for hyperspectral band selection. With hundreds of highly correlated narrow spectral bands, the number of training samples required to train a statistical classifier is high. Thus, the problem is to select a subset of bands without compromising the classification accuracy. One way to solve this problem is to model an objective function that measures class separability and utilize it to arrive at a subset of bands. MFO is inspired by the navigation method of moths in nature, called transverse orientation: a moth travels long distances in a straight line by keeping a constant angle with the Moon, a strategy that works because the Moon is extremely far away. In MFO, moths navigate the search space analogously. Our research tested MFO on three benchmark hyperspectral datasets—Indian Pines, University of Pavia, and Salinas. MFO produced an Overall Accuracy (OA) of 88.98%, 94.85%, and 97.17%, respectively, on the three datasets. Our experimental results indicate that MFO produces better OA and Kappa than state-of-the-art band selection algorithms such as particle swarm optimization, grey wolf, cuckoo search, and genetic algorithms. The analysis results prove that the proposed approach effectively addresses the spectral band selection problem and provides high classification accuracy. Full article
(This article belongs to the Topic Hyperspectral Imaging: Methods and Applications)
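The core MFO update is compact enough to show directly; below is a minimal continuous-variable sketch. Band selection would additionally map each position to a band subset, and full MFO also keeps a sorted flame archive across iterations, so treat this as a simplified variant.

```python
import numpy as np

def mfo(objective, dim, n_moths=30, iters=100, lb=0.0, ub=1.0, b=1.0, seed=0):
    """Minimal Moth-Flame Optimization: moths spiral toward the best
    positions (flames) via D * exp(b*t) * cos(2*pi*t) + flame."""
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    for it in range(iters):
        fitness = np.apply_along_axis(objective, 1, moths)
        flames = moths[np.argsort(fitness)].copy()   # best moths become flames
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]
            D = np.abs(f - moths[i])                 # distance to the flame
            t = rng.uniform(-1, 1, dim)
            moths[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + f
        moths = np.clip(moths, lb, ub)
    fitness = np.apply_along_axis(objective, 1, moths)
    return moths[np.argmin(fitness)]

best = mfo(lambda x: ((x - 0.3) ** 2).sum(), dim=5)   # toy usage
```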

27 pages, 2464 KiB  
Review
A Review of Watershed Implementations for Segmentation of Volumetric Images
by Anton Kornilov, Ilia Safonov and Ivan Yakimchuk
J. Imaging 2022, 8(5), 127; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050127 - 26 Apr 2022
Cited by 23 | Viewed by 4160
Abstract
Watershed is a widely used image segmentation algorithm. Most researchers understand just the basic idea of this method: a grayscale image is considered as a topographic relief, which is flooded from initial basins. However, they are frequently unaware of the options of the algorithm and the peculiarities of its realizations. There are many watershed implementations in software packages and products. Even if these packages are based on the identical algorithm—watershed by flooding—their outcomes, processing speed, and consumed memory vary greatly. In particular, the difference among various implementations is noticeable for huge volumetric images, for instance, tomographic 3D images, for which the low performance and high memory requirements of watershed might be bottlenecks. In our review, we discuss the peculiarities of algorithms with and without waterline generation, the impact of connectivity type and relief quantization level on the result, approaches for parallelization, as well as other method options. We present detailed benchmarking of seven open-source and three commercial software implementations of marker-controlled watershed for semantic or instance segmentation. We compare those software packages on one synthetic and two natural volumetric images. The aim of the review is to provide information and advice for practitioners to select the appropriate version of watershed for their problem solving. In addition, we forecast future directions of software development for 3D image segmentation by watershed. Full article
(This article belongs to the Special Issue Image Segmentation Techniques: Current Status and Future Directions)
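For orientation, the marker-controlled variant benchmarked in the review typically looks like the following scikit-image sketch; the parameters are illustrative, not the review's benchmark settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def split_particles(volume):
    """Marker-controlled watershed for instance segmentation of a 3D image."""
    binary = volume > threshold_otsu(volume)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary, min_distance=5)
    markers = np.zeros(volume.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)   # flood from the markers
```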

13 pages, 5105 KiB  
Article
Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
by Yuki Ishida, Yoshitsugu Manabe and Noriko Yata
J. Imaging 2022, 8(5), 125; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050125 - 26 Apr 2022
Cited by 2 | Viewed by 2405
Abstract
Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and depth measurement failure for dark hair colors such as black hair. Recently, point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has been studied, but only the shape is learned, rather than the completion of colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and the International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds based on the Chamfer Distance (CD) or Earth Mover’s Distance (EMD) of point cloud shape evaluation as a color loss. In addition, an adversarial loss to L*a*b*-Depth images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained using a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves the image domain’s evaluation. Full article
(This article belongs to the Special Issue Intelligent Media Processing)
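The color-aware Chamfer distance described above can be written down compactly; here is a hedged NumPy/SciPy sketch, noting that the paper may weight or square the terms differently.

```python
import numpy as np
from scipy.spatial import cKDTree

def colored_chamfer(p, q, color_weight=1.0):
    """Chamfer distance over XYZ plus an L*a*b* color term.

    p, q -- (N, 6) arrays of [x, y, z, L*, a*, b*] points.
    """
    d_pq, idx_pq = cKDTree(q[:, :3]).query(p[:, :3])   # nearest neighbours p -> q
    d_qp, idx_qp = cKDTree(p[:, :3]).query(q[:, :3])   # nearest neighbours q -> p
    c_pq = np.linalg.norm(p[:, 3:] - q[idx_pq, 3:], axis=1)   # color differences
    c_qp = np.linalg.norm(q[:, 3:] - p[idx_qp, 3:], axis=1)
    return (d_pq.mean() + d_qp.mean()) + color_weight * (c_pq.mean() + c_qp.mean())
```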

15 pages, 15932 KiB  
Article
Extraction and Calculation of Roadway Area from Satellite Images Using Improved Deep Learning Model and Post-Processing
by Varun Yerram, Hiroyuki Takeshita, Yuji Iwahori, Yoshitsugu Hayashi, M. K. Bhuyan, Shinji Fukui, Boonserm Kijsirikul and Aili Wang
J. Imaging 2022, 8(5), 124; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050124 - 25 Apr 2022
Cited by 6 | Viewed by 3354
Abstract
Roadway area calculation is a novel problem in remote sensing and urban planning. This paper models it as a two-step problem: roadway extraction and area calculation. Roadway extraction from satellite images is a problem that has been tackled many times before. This paper proposes a method that uses the pixel resolution of the satellite images to calculate the area of the extracted roads. The proposed approach uses U-Net and ResNet variants called U-Net++ and ResNeXt. This state-of-the-art model is combined with the proposed efficient post-processing approach to improve the overlap with ground truth labels. The performance of the proposed road extraction algorithm is evaluated on the Massachusetts dataset, and it is shown that the proposed approach outperforms existing solutions that use models from the U-Net family. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
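The area-calculation step admits a very small sketch: given a binary road mask predicted by the network and the image's ground sampling distance, the roadway area is the road-pixel count times the ground area of one pixel. The helper below is illustrative; the 1 m/pixel default is an assumption for the example, not a value from the paper.

```python
# Illustrative area calculation from a binary road mask; the resolution
# value is an assumption for the example, not taken from the paper.
import numpy as np

def roadway_area_m2(road_mask: np.ndarray, meters_per_pixel: float = 1.0) -> float:
    """road_mask: 2D array with 1 for road pixels, 0 otherwise."""
    return float(road_mask.sum()) * meters_per_pixel ** 2

# Example: 4 road pixels at 1 m/pixel cover 4 square meters.
mask = np.array([[0, 1, 0],
                 [1, 1, 0],
                 [0, 1, 0]])
print(roadway_area_m2(mask))  # 4.0
```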

31 pages, 603 KiB  
Review
Microwave Imaging for Early Breast Cancer Detection: Current State, Challenges, and Future Directions
by Nour AlSawaftah, Salma El-Abed, Salam Dhou and Amer Zakaria
J. Imaging 2022, 8(5), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050123 - 23 Apr 2022
Cited by 48 | Viewed by 6917
Abstract
Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer-related death among women worldwide. Breast screening and early detection are currently the most successful approaches to managing and treating this disease. Among the several imaging modalities used to detect breast cancer, microwave imaging (MWI) is attracting considerable attention as a promising tool for early detection: it is a noninvasive, relatively inexpensive, fast, convenient, and safe screening technique. The purpose of this paper is to provide an up-to-date survey of the principles, developments, and current research status of MWI for breast cancer detection. The paper is structured in two parts. The first gives an overview of current MWI techniques for detecting breast cancer and explains the working principle of MWI and its two main types, namely, microwave tomography and radar-based imaging. The second reviews the initial experiments along with the more recent studies on the use of MWI for breast cancer detection. The paper also summarizes the challenges facing MWI as a breast cancer detection tool and outlines future research directions. On the whole, MWI has proven its potential as a screening tool for breast cancer detection, both standalone and complementary to other techniques; however, a few challenges must still be addressed to unlock the full potential of this imaging modality and translate it to clinical settings. Full article
(This article belongs to the Special Issue Intelligent Strategies for Medical Image Analysis)

18 pages, 12874 KiB  
Article
Surreptitious Adversarial Examples through Functioning QR Code
by Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin and Kazunori Kotani
J. Imaging 2022, 8(5), 122; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050122 - 22 Apr 2022
Cited by 2 | Viewed by 2527
Abstract
Continuous advances in Convolutional Neural Network (CNN) and deep learning technology have been applied to facilitate many tasks of daily life. However, vulnerabilities in these models have rapidly increased the security risks to users’ information and privacy. We have developed a novel adversarial attack that conceals its intent from human intuition through the use of a modified QR code. The modified code can be consistently scanned by a reader while retaining adversarial efficacy against image classification models. The adversarial QR patch was created and embedded into an input image to generate adversarial examples, which were trained against CNN image classification models. Experiments were performed to investigate the trade-offs among different patch shapes and to find the patch’s optimal balance between scannability and adversarial efficacy. Furthermore, we investigated whether particular classes of images are more resistant, or more vulnerable, to the adversarial QR attack, and how well the attack generalizes across different image classification models. Full article
(This article belongs to the Special Issue Intelligent Media Processing)
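For orientation, one generic untargeted patch-update step of the kind this abstract describes can be sketched in PyTorch. This is a hedged illustration, not the authors' method: it pastes a patch into an image through a binary mask and takes a gradient step that increases the classification loss, and it omits the QR-scannability constraint that is central to the paper; all names are hypothetical.

```python
# Generic sketch of one untargeted adversarial-patch update (illustrative;
# omits the paper's QR-scannability constraint). All names are hypothetical.
import torch
import torch.nn.functional as F

def patch_update(model, image, patch, mask, true_label, lr=0.01):
    """image, patch, mask: (C, H, W) tensors; mask is 1 where the patch sits."""
    patch = patch.detach().requires_grad_(True)
    adv = image * (1 - mask) + patch * mask       # embed the patch in the image
    logits = model(adv.unsqueeze(0))
    loss = -F.cross_entropy(logits, torch.tensor([true_label]))  # maximize CE
    loss.backward()
    with torch.no_grad():
        # Descending -CE ascends the classification loss on the true label;
        # clamping keeps the patch in the valid pixel range.
        patch = (patch - lr * patch.grad).clamp(0.0, 1.0)
    return patch
```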

16 pages, 1555 KiB  
Article
Weakly Supervised Polyp Segmentation in Colonoscopy Images Using Deep Neural Networks
by Siwei Chen, Gregor Urban and Pierre Baldi
J. Imaging 2022, 8(5), 121; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050121 - 22 Apr 2022
Cited by 6 | Viewed by 3034
Abstract
Colorectal cancer (CRC) is a leading cause of mortality worldwide, and preventive screening modalities such as colonoscopy have been shown to noticeably decrease CRC incidence and mortality. Improving colonoscopy quality remains challenging due to limiting factors that include the training levels of colonoscopists and the variability in polyp sizes, morphologies, and locations. Deep learning methods have led to state-of-the-art systems for identifying polyps in colonoscopy videos. In this study, we show that deep learning can also be applied to segmenting polyps in real time, and that the underlying models can be trained using mostly weakly labeled data, in the form of bounding box annotations that contain no precise contour information. A novel dataset, Polyp-Box-Seg, of 4070 colonoscopy images with polyps from over 2000 patients is collected, and a subset of 1300 images is manually annotated with segmentation masks. A series of models is trained to evaluate various strategies that utilize bounding box annotations for segmentation tasks. A model trained on the 1300 polyp images with segmentation masks achieves a Dice coefficient of 81.52%, which improves significantly to 85.53% with a weakly supervised strategy leveraging the bounding box images. The Polyp-Box-Seg dataset, together with a real-time video demonstration of the segmentation system, is publicly available. Full article
(This article belongs to the Special Issue Advances in Deep Neural Networks for Visual Pattern Recognition)
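As a reference for the reported scores, the Dice coefficient between binary masks, and one plausible way to turn a bounding box into a weak mask, can be sketched as follows. Both helpers are illustrative assumptions, not the authors' code or their specific weak-supervision strategy.

```python
# Illustrative helpers: Dice overlap between binary masks, and a weak mask
# obtained by filling a bounding box. Not the authors' code; the box format
# (x0, y0, x1, y1) is an assumption.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """pred, truth: binary masks of the same shape."""
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def box_to_weak_mask(shape, box):
    """Fill the annotated rectangle: every pixel inside the box counts as polyp."""
    mask = np.zeros(shape, dtype=np.uint8)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = 1
    return mask
```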

13 pages, 1009 KiB  
Article
Time Synchronization of Multimodal Physiological Signals through Alignment of Common Signal Types and Its Technical Considerations in Digital Health
by Ran Xiao, Cheng Ding and Xiao Hu
J. Imaging 2022, 8(5), 120; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8050120 - 21 Apr 2022
Cited by 5 | Viewed by 1969
Abstract
Background: Despite advancements in digital health, it remains challenging to obtain precise time synchronization of multimodal physiological signals collected by different devices. Existing algorithms mainly rely on specific physiological features, which restricts their use to certain signal types. The present study aims to complement previous algorithms by solving a niche time-alignment problem that arises when a common signal type is available across devices. Methods: We propose a simple time-alignment approach based on the direct cross-correlation of temporal amplitudes, making it agnostic to signal type and thus generalizable. The approach was tested on a public electrocardiographic (ECG) dataset to simulate the synchronization of signals collected from an ECG watch and an ECG patch. The algorithm was evaluated with respect to key practical factors, including sample duration, signal quality index (SQI), resilience to noise, and varying sampling rates. Results: The proposed approach requires only a short sample duration (30 s) to operate and demonstrates stable performance across varying sampling rates as well as resilience to common noise. The lowest synchronization error achieved by the algorithm is 0.13 s with the integration of SQI thresholding. Conclusions: Our findings help improve the time alignment of multimodal signals in digital health and advance healthcare toward precise remote monitoring and disease prevention. Full article
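The core alignment idea, cross-correlating the common signal type and reading off the lag that maximizes the correlation, fits in a few lines. The sketch below is a plain NumPy/SciPy illustration under the abstract's assumptions (same signal type, already at a common sampling rate); function and variable names are hypothetical.

```python
# Illustrative time-offset estimate by direct cross-correlation of temporal
# amplitudes; assumes both signals share the sampling rate fs (Hz).
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_offset_seconds(sig_a, sig_b, fs):
    """Return the lag (in seconds) of sig_a relative to sig_b."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)  # amplitude-normalize
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    xcorr = correlate(a, b, mode="full")
    lags = correlation_lags(len(a), len(b), mode="full")
    return lags[np.argmax(xcorr)] / fs
```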
