J. Imaging, Volume 7, Issue 3 (March 2021) – 20 articles

Cover Story: Visual features have experienced huge advances in the last decade. However, even in the era of deep learning, retrieval systems often perform comparisons by computing measures that consider only pairs of images and ignore the relevant information encoded in the relationships among images. To go beyond pairwise analysis, post-processing methods have been proposed. Among them, two categories stand out as especially representative: diffusion processes and rank-based approaches. In this paper, an efficient rank-based diffusion process is proposed, combining both approaches while avoiding the drawbacks of each. The method is capable of approximating a diffusion process based only on the top positions of ranked lists, while ensuring its convergence.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
27 pages, 733 KiB  
Article
An Ontological Framework to Facilitate Early Detection of ‘Radicalization’ (OFEDR)—A Three World Perspective
by Linda Wendelberg
J. Imaging 2021, 7(3), 60; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030060 - 22 Mar 2021
Cited by 4 | Viewed by 2955
Abstract
This paper presents an ontology that involves using information from various sources across different disciplines and combining it in order to predict whether a given person is in a radicalization process. The purpose of the ontology is to improve the early detection of radicalization in persons, thereby helping to prevent the unwanted escalation of radicalization processes. The ontology combines findings on existential anxiety that are related to political radicalization with well-known criminal profiles or radicalization findings. The Protégé software, developed at Stanford University, including its SPARQL tab, is used to develop and test the ontology. The testing, which involved five models, showed that the ontology could detect individuals according to "risk profiles" based on existential anxiety. SPARQL queries showed an average detection probability of 5% when including only a risk population and 2% on the whole test population. Testing with machine learning algorithms showed that including fewer than four variables in each model produced unreliable results. This suggests that the Ontological Framework to Facilitate Early Detection of ‘Radicalization’ (OFEDR) risk model should consist of at least four variables to reach a certain level of reliability. Analysis shows that using a probability based on an estimated risk of terrorism may produce a gap between the number of subjects who actually show early signs of radicalization and those found by using probability estimates for extremely rare events. It is reasoned that an ontology exists as a world three object in the real world. Full article

16 pages, 1455 KiB  
Article
Copy-Move Forgery Detection (CMFD) Using Deep Learning for Image and Video Forensics
by Yohanna Rodriguez-Ortega, Dora M. Ballesteros and Diego Renza
J. Imaging 2021, 7(3), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030059 - 20 Mar 2021
Cited by 43 | Viewed by 6083
Abstract
With the exponential growth of high-quality fake images on social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer-learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time. Full article
(This article belongs to the Special Issue Image and Video Forensics)
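The duplicated-region search performed by the traditional copy-move approaches mentioned in the abstract can be illustrated with a minimal exact-block-matching sketch. This is pure-Python toy code under strong assumptions: real CMFD systems match robust block features rather than raw pixel values so that compression and noise do not break the match.

```python
def find_copy_move(img, block=2, min_shift=3):
    """Toy copy-move detector: index every block x block patch by its raw
    pixel values and report pairs of identical patches that lie at least
    min_shift apart (to skip trivially overlapping near-duplicates)."""
    h, w = len(img), len(img[0])
    seen = {}      # patch content -> first position where it occurred
    matches = []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            key = tuple(img[i + di][j + dj]
                        for di in range(block) for dj in range(block))
            if key in seen:
                pi, pj = seen[key]
                if max(abs(i - pi), abs(j - pj)) >= min_shift:
                    matches.append(((pi, pj), (i, j)))
            else:
                seen[key] = (i, j)
    return matches
```

On an image where a 2x2 patch has been copied elsewhere, the function returns the source/destination pair of the duplicated region.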

20 pages, 6782 KiB  
Article
Calibration-Less Multi-Coil Compressed Sensing Magnetic Resonance Image Reconstruction Based on OSCAR Regularization
by Loubna El Gueddari, Chaithya Giliyar Radhakrishna, Emilie Chouzenoux and Philippe Ciuciu
J. Imaging 2021, 7(3), 58; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030058 - 19 Mar 2021
Cited by 4 | Viewed by 2414
Abstract
Over the last decade, the combination of compressed sensing (CS) with acquisition over multiple receiver coils in magnetic resonance imaging (MRI) has allowed the emergence of faster scans while maintaining a good signal-to-noise ratio (SNR). Self-calibrating techniques, such as ESPIRiT, have become the standard approach to estimating the coil sensitivity maps prior to the reconstruction stage. In this work, we proceed differently and introduce a new calibration-less multi-coil CS reconstruction method. Calibration-less techniques no longer require the prior extraction of sensitivity maps to perform multi-coil image reconstruction, but usually alternate between sensitivity map estimation and image reconstruction. Here, to get rid of the nonconvexity of the latter approach, we reconstruct as many MR images as there are coils. To compensate for the ill-posedness of this inverse problem, we leverage the structured sparsity of the multi-coil images in a wavelet transform domain while adapting to variations in SNR across coils thanks to the OSCAR (octagonal shrinkage and clustering algorithm for regression) regularization. Coil-specific complex-valued MR images are thus obtained by minimizing a convex but nonsmooth objective function using the proximal primal-dual Condat-Vù algorithm. Comparison and validation on retrospective Cartesian and non-Cartesian studies based on the brain fastMRI data set demonstrate that the proposed reconstruction method significantly outperforms the state of the art (ℓ1-ESPIRiT, calibration-less AC-LORAKS, and CaLM methods) on magnitude images for the T1 and FLAIR contrasts. Additionally, further validation on 8- to 20-fold prospectively accelerated high-resolution ex vivo human brain MRI data collected at 7 Tesla confirms the retrospective results. Overall, OSCAR-based regularization preserves phase information more accurately (both visually and quantitatively) than other approaches, an asset that can only be assessed in real prospective experiments. Full article
(This article belongs to the Special Issue Inverse Problems and Imaging)
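The OSCAR regularizer referred to above combines an ℓ1 term with a pairwise ℓ∞ term; the pairwise term encourages coefficients of comparable importance to share the same magnitude, which is what lets the reconstruction adapt to SNR variations across coils. A minimal sketch of the penalty value follows (the paper applies it to wavelet coefficients inside a primal-dual solver, which is not reproduced here):

```python
def oscar_penalty(beta, lam1, lam2):
    """OSCAR penalty: lam1 * sum_i |b_i| + lam2 * sum_{i<j} max(|b_i|, |b_j|).
    The pairwise L-infinity term clusters coefficients toward equal magnitudes."""
    n = len(beta)
    l1 = sum(abs(b) for b in beta)
    pairwise = sum(max(abs(beta[i]), abs(beta[j]))
                   for i in range(n) for j in range(i + 1, n))
    return lam1 * l1 + lam2 * pairwise
```

For beta = [3, -1, 2] with lam1 = 1 and lam2 = 0.5, the ℓ1 part is 6 and the pairwise part is 3 + 3 + 2 = 8, giving a penalty of 10.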

12 pages, 4774 KiB  
Review
NEURAP—A Dedicated Neutron-Imaging Facility for Highly Radioactive Samples
by Eberhard Lehmann, Knud Thomsen, Markus Strobl, Pavel Trtik, Johannes Bertsch and Yong Dai
J. Imaging 2021, 7(3), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030057 - 16 Mar 2021
Cited by 3 | Viewed by 2015
Abstract
NEURAP is a dedicated set-up at the Swiss spallation neutron source (SINQ) at the Paul Scherrer Institut (PSI), optionally implemented as a special configuration of the neutron-imaging station NEUTRA. It is one of very few instruments available worldwide that enable neutron imaging of highly radioactive samples to be performed routinely, with special precautions and following a specific procedure. Since the relevant objects are strong γ-sources, dedicated techniques are needed to handle the samples and to perform neutron imaging despite the radiation background. Dysprosium (Dy)-loaded imaging plates, effectively made sensitive to neutrons only, are employed. Neutrons are captured by the Dy during neutron irradiation. The imaging plate is then erased, removing the signal deposited by gamma radiation. A subsequent, relatively long self-exposure by the radiation from the intrinsic neutron-activated Dy within the imaging plate yields the neutron-only radiograph, which is finally read out. During more than 20 years of NEURAP operation, images have been obtained for two major applications: (a) highly radioactive SINQ target components were investigated after a long operational life; and (b) spent fuel rods and their cladding from Swiss nuclear power plants were characterized. Quantitative analysis of the image data demonstrated the accumulation of spallation products in the lead-filled "Cannelloni" Zircaloy tubes of the SINQ target and the aggregation of hydrogen at specific sites in used fuel pins of power plants and their cladding, respectively. These results continue to help in understanding material degradation and in optimizing operational regimes, which might extend the safe lifetimes of these components. Full article
(This article belongs to the Special Issue Neutron Imaging)

16 pages, 14577 KiB  
Article
Smoothed Shock Filtering: Algorithm and Applications
by Antoine Vacavant
J. Imaging 2021, 7(3), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030056 - 15 Mar 2021
Cited by 2 | Viewed by 2135
Abstract
This article presents the smoothed shock filter, which iteratively produces local segmentations in an image's inflection zones with smoothed morphological operators (dilations, erosions). Hence, it enhances contours by creating smoothed ruptures while preserving homogeneous regions. After describing the algorithm, we show that it is a robust approach for denoising compared to related works. Then, we describe how we exploited this filter as a pre-processing step in different image analysis tasks (medical image segmentation, fMRI, and texture classification). By means of its ability to enhance important patterns in images, the smoothed shock filter has a real positive impact on such applications, which we would like to explore further in the future. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
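A shock filter switches between dilation and erosion according to the sign of a second-derivative operator, and the "smoothed" variant regularizes that switch. The simplified pure-Python iteration below is a sketch under assumed choices (3x3 structuring element, 4-neighbour Laplacian, box-filter smoothing); the paper's exact operators may differ.

```python
def smoothed_shock_step(img, smooth_passes=1):
    """One simplified smoothed-shock iteration: where the box-smoothed
    Laplacian is negative (bright side of an edge) the pixel takes the local
    3x3 maximum (dilation); where positive, the local minimum (erosion).
    Border pixels are left unchanged."""
    h, w = len(img), len(img[0])

    def box_smooth(f):
        out = [row[:] for row in f]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                out[i][j] = sum(f[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
        return out

    # 4-neighbour Laplacian (zero on the border)
    lap = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap[i][j] = (img[i - 1][j] + img[i + 1][j]
                         + img[i][j - 1] + img[i][j + 1] - 4 * img[i][j])
    for _ in range(smooth_passes):
        lap = box_smooth(lap)

    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            if lap[i][j] < 0:
                out[i][j] = max(neigh)   # dilation
            elif lap[i][j] > 0:
                out[i][j] = min(neigh)   # erosion
    return out
```

Because each output pixel is either unchanged or a neighbourhood extremum, the filter never invents new grey levels, which is why it preserves homogeneous regions while sharpening ruptures.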

21 pages, 22815 KiB  
Article
An Efficient Method for No-Reference Video Quality Assessment
by Mirko Agarla, Luigi Celona and Raimondo Schettini
J. Imaging 2021, 7(3), 55; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030055 - 13 Mar 2021
Cited by 13 | Viewed by 2711
Abstract
Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are widely investigated due to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and semantic content of video frames using two lightweight Convolutional Neural Networks (CNNs). Then, it estimates the quality score of the entire video using a Support Vector Regressor (SVR). We compare the proposed method against several relevant state-of-the-art methods using four benchmark databases containing user-generated videos (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC). The results show that, at a substantially lower computational cost, the proposed method predicts subjective video quality in line with state-of-the-art methods on individual databases and generalizes better than existing methods in a cross-database setup. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
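A sampling module that selects a predetermined number of frames can be as simple as evenly spaced indices over the sequence. This toy version is an assumption, not the paper's actual sampler, but it shows the interface such a module exposes:

```python
def sample_frames(n_total, n_sample):
    """Return n_sample evenly spaced frame indices covering a sequence of
    n_total frames, always including the first and last frame."""
    if n_sample == 1:
        return [0]
    return [round(i * (n_total - 1) / (n_sample - 1)) for i in range(n_sample)]
```

For a 100-frame clip and a budget of 5 frames, the indices start at 0, end at 99, and are strictly increasing, so the whole sequence is covered before the CNN features are extracted.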

12 pages, 4293 KiB  
Article
Geometry Calibration of a Modular Stereo Cone-Beam X-ray CT System
by Van Nguyen, Joaquim G. Sanctorum, Sam Van Wassenbergh, Joris J. J. Dirckx, Jan Sijbers and Jan De Beenhouwer
J. Imaging 2021, 7(3), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030054 - 13 Mar 2021
Cited by 8 | Viewed by 3261
Abstract
Compared to single-source systems, stereo X-ray CT systems allow projection data to be acquired within a reduced amount of time, for an extended field of view, or for dual X-ray energies. To exploit the benefits of a dual X-ray system, its acquisition geometry needs to be calibrated. Unfortunately, in modular stereo X-ray CT setups, geometry misalignment occurs each time the setup is changed, which calls for an efficient calibration procedure. Although many studies have dealt with the geometry calibration of an X-ray CT system, little research targets the calibration of a dual cone-beam X-ray CT system. In this work, we present a phantom-based calibration procedure to accurately estimate the geometry of a stereo cone-beam X-ray CT system. Simulated as well as real experiments show that the calibration procedure can be used to accurately estimate the geometry of a modular stereo X-ray CT system, thereby reducing the misalignment artifacts in the reconstruction volumes. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)

17 pages, 2548 KiB  
Article
Analysis of Diagnostic Images of Artworks and Feature Extraction: Design of a Methodology
by Annamaria Amura, Alessandro Aldini, Stefano Pagnotta, Emanuele Salerno, Anna Tonazzini and Paolo Triolo
J. Imaging 2021, 7(3), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030053 - 12 Mar 2021
Cited by 9 | Viewed by 2547
Abstract
Digital images represent the primary tool for diagnostics and for documenting the state of preservation of artifacts. Today, the interpretive filters that allow one to characterize information and communicate it are extremely subjective. Our research goal is to study a quantitative analysis methodology to facilitate and semi-automate the recognition and polygonization of areas corresponding to the characteristics sought. To this end, several algorithms have been tested that allow the characteristics to be separated and binary masks to be created, which can then be statistically analyzed and polygonized. Since our methodology aims to offer conservator-restorers a model for quickly obtaining graphic documentation usable for design and statistical purposes, this process has been implemented in a single Geographic Information System (GIS) application. Full article
(This article belongs to the Special Issue Fine Art Pattern Extraction and Recognition)
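The step of turning a separated characteristic into a binary mask with summary statistics, prior to polygonization, can be sketched as follows. This is illustrative only; the actual methodology runs inside a GIS application with its own vectorization tools.

```python
def binary_mask_stats(img, threshold):
    """Threshold a grayscale image into a binary mask, then report the number
    of 4-connected regions (candidate polygons) and the covered-area fraction."""
    h, w = len(img), len(img[0])
    mask = [[1 if img[i][j] >= threshold else 0 for j in range(w)] for i in range(h)]
    area = sum(map(sum, mask))
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] and not seen[si][sj]:
                regions += 1                       # new connected component
                stack = [(si, sj)]
                seen[si][sj] = True
                while stack:                       # flood fill (4-connectivity)
                    i, j = stack.pop()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
    return mask, regions, area / (h * w)
```

Each connected component of the mask is what a subsequent polygonization step would trace into a vector outline.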

45 pages, 8398 KiB  
Review
A Review of Detection and Removal of Raindrops in Automotive Vision Systems
by Yazan Hamzeh and Samir A. Rawashdeh
J. Imaging 2021, 7(3), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030052 - 10 Mar 2021
Cited by 12 | Viewed by 3848
Abstract
Research on the effect of adverse weather conditions on the performance of vision-based algorithms for automotive tasks has attracted significant interest. It is generally accepted that adverse weather conditions reduce the quality of captured images and have a detrimental effect on the performance of algorithms that rely on these images. Rain is a common and significant source of image quality degradation. Adherent rain on a vehicle's windshield in the camera's field of view causes distortion that affects a wide range of essential automotive perception tasks, such as object recognition, traffic sign recognition, localization, mapping, and other advanced driver assist systems (ADAS) and self-driving features. As rain is a common occurrence and these systems are safety-critical, algorithm reliability in the presence of rain, and potential countermeasures, must be well understood. This survey paper describes the main techniques for detecting adherent raindrops that accumulate on the protective cover of cameras and for removing them from images. Full article

21 pages, 2795 KiB  
Article
Two Ensemble-CNN Approaches for Colorectal Cancer Tissue Type Classification
by Emanuela Paladini, Edoardo Vantaggiato, Fares Bougourzi, Cosimo Distante, Abdenour Hadid and Abdelmalik Taleb-Ahmed
J. Imaging 2021, 7(3), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030051 - 09 Mar 2021
Cited by 31 | Viewed by 4249
Abstract
In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
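The Mean-Ensemble idea, averaging the per-class probabilities of several CNNs before taking the argmax, can be sketched in a few lines (the NN-Ensemble variant, which learns the combination with a neural network, is not shown):

```python
def mean_ensemble(prob_lists):
    """Mean-Ensemble sketch: average the per-model class-probability vectors,
    then return (predicted class index, averaged probabilities)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg
```

With two models outputting [0.6, 0.4] and [0.2, 0.8], the averaged vector favours class 1 even though the models disagree individually.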

23 pages, 9519 KiB  
Article
VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents
by Anselmo Ferreira, Ehsan Nowroozi and Mauro Barni
J. Imaging 2021, 7(3), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030050 - 08 Mar 2021
Cited by 13 | Viewed by 3487
Abstract
The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area. Full article
(This article belongs to the Special Issue Image and Video Forensics)

23 pages, 3037 KiB  
Article
Efficient Rank-Based Diffusion Process with Assured Convergence
by Daniel Carlos Guimarães Pedronette, Lucas Pascotti Valem and Longin Jan Latecki
J. Imaging 2021, 7(3), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030049 - 08 Mar 2021
Cited by 6 | Viewed by 2211
Abstract
Visual features and representation learning strategies experienced huge advances in the previous decade, mainly supported by deep learning approaches. However, retrieval tasks are still performed mainly based on traditional pairwise dissimilarity measures, while the learned representations lie on high-dimensional manifolds. With the aim of going beyond pairwise analysis, post-processing methods have been proposed to replace pairwise measures with globally defined measures capable of analyzing collections in terms of the underlying data manifold. The most representative approaches are diffusion and rank-based methods. While the diffusion approaches can be computationally expensive, the rank-based methods lack theoretical background. In this paper, we propose an efficient Rank-based Diffusion Process which combines both approaches and avoids the drawbacks of each. The obtained method is capable of efficiently approximating a diffusion process by exploiting rank-based information, while assuring its convergence. The algorithm exhibits very low asymptotic complexity and can be computed regionally, making it suitable for queries from outside the dataset. An experimental evaluation conducted on image retrieval and person re-ID tasks on diverse datasets demonstrates the effectiveness of the proposed approach, with results comparable to the state of the art. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
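As a toy illustration of how rank-based information can feed a diffusion process, one can build a sparse affinity matrix from the top positions of ranked lists. This is not the paper's formulation, only a sketch of the kind of input such a method works with: weights decay with rank position, and symmetrization makes mutual top-k neighbours score highest.

```python
def rank_affinity(ranked_lists, top_k):
    """Build a sparse affinity matrix from top-k ranked lists: each item
    contributes a weight that decays linearly with rank position, and the
    result is symmetrised so mutual neighbours reinforce each other."""
    n = len(ranked_lists)
    W = [[0.0] * n for _ in range(n)]
    for i, lst in enumerate(ranked_lists):
        for pos, j in enumerate(lst[:top_k]):
            W[i][j] += 1.0 - pos / top_k   # position 0 -> weight 1.0
    # symmetrise: affinity is averaged over both directions
    return [[(W[i][j] + W[j][i]) / 2.0 for j in range(n)] for i in range(n)]
```

Only the top-k positions are ever touched, which mirrors the efficiency argument in the abstract: the affinity stays sparse regardless of collection size.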

24 pages, 8397 KiB  
Article
Design and Comparison of Image Hashing Methods: A Case Study on Cork Stopper Unique Identification
by Ricardo Fitas, Bernardo Rocha, Valter Costa and Armando Sousa
J. Imaging 2021, 7(3), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030048 - 08 Mar 2021
Cited by 4 | Viewed by 2265
Abstract
Cork stoppers were shown to have unique characteristics that allow their use for authentication purposes in an anti-counterfeiting effort. This authentication process relies on the comparison between a user's cork image and all registered cork images in the database of genuine items. As the database grows, this one-to-many comparison method becomes increasingly slow, and its usefulness therefore decreases. To tackle this problem, the present work designs and compares hashing-assisted image matching methods that can be used in cork stopper authentication. The analyzed approaches are the discrete cosine transform, wavelet transform, Radon transform, and other methods such as difference hash and average hash. The most successful approach uses a 1024-bit hash with the difference hash method, providing a 98% accuracy rate. By transforming the image matching problem into a hash matching problem, the presented approach becomes almost 40 times faster than approaches in the literature. Full article
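The difference hash the study found most successful compares adjacent pixels of a downsampled image. A minimal pure-Python sketch follows; the nearest-neighbour downsampling here is an assumption (production implementations typically interpolate), and an 8x8 grid yields a 64-bit hash, while 32x32 would yield the 1024 bits used in the paper.

```python
def dhash(gray, hash_w=8, hash_h=8):
    """Difference hash: downsample to hash_h rows x (hash_w + 1) columns by
    nearest-neighbour sampling, then emit 1 where each pixel is darker than
    its right neighbour, giving hash_w * hash_h bits."""
    h, w = len(gray), len(gray[0])
    bits = []
    for r in range(hash_h):
        i = r * h // hash_h
        row = [gray[i][c * w // (hash_w + 1)] for c in range(hash_w + 1)]
        bits.extend(1 if row[c] < row[c + 1] else 0 for c in range(hash_w))
    return bits

def hamming(a, b):
    """Number of differing bits: the distance used for hash matching."""
    return sum(x != y for x, y in zip(a, b))
```

Matching then reduces to comparing Hamming distances against a threshold, which is what makes the one-to-many database lookup fast.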

26 pages, 4262 KiB  
Article
Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
by Yasmin M. Alsakar, Nagham E. Mekky and Noha A. Hikal
J. Imaging 2021, 7(3), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030047 - 05 Mar 2021
Cited by 7 | Viewed by 2380
Abstract
Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing software programs are available and perform well in tampering with video contents or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications; for example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The developed scheme is based on representing highly correlated video data with a low-computational-complexity third-order tensor tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgery: insertion and deletion. These tensor data are orthogonally transformed to achieve further data reduction and to provide good features for tracing forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme, with a precision value of up to 99% in detecting and locating both types of attacks for static as well as dynamic videos, quick-moving foreground items (single or multiple), and zoom-in and zoom-out datasets, which are rarely tested in previous works. Moreover, the proposed scheme offers reduced runtime and linear computational complexity: on the computer configuration used, an average of 35 s is needed to detect and locate 40 forged frames out of 300. Full article
(This article belongs to the Special Issue Image and Video Forensics)

20 pages, 17898 KiB  
Article
Image Enhanced Mask R-CNN: A Deep Learning Pipeline with New Evaluation Measures for Wind Turbine Blade Defect Detection and Classification
by Jiajun Zhang, Georgina Cosma and Jason Watkins
J. Imaging 2021, 7(3), 46; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030046 - 04 Mar 2021
Cited by 29 | Viewed by 7094
Abstract
Demand for wind power has grown, and this has increased wind turbine blade (WTB) inspections and defect repairs. This paper empirically investigates the performance of state-of-the-art deep learning algorithms, namely, YOLOv3, YOLOv4, and Mask R-CNN for detecting and classifying defects by type. The paper proposes new performance evaluation measures suitable for defect detection tasks, and these are: Prediction Box Accuracy, Recognition Rate, and False Label Rate. Experiments were carried out using a dataset, provided by the industrial partner, that contains images from WTB inspections. Three variations of the dataset were constructed using different image augmentation settings. Results of the experiments revealed that on average, across all proposed evaluation measures, Mask R-CNN outperformed all other algorithms when transformation-based augmentations (i.e., rotation and flipping) were applied. In particular, when using the best dataset, the mean Weighted Average (mWA) values (i.e., mWA is the average of the proposed measures) achieved were: Mask R-CNN: 86.74%, YOLOv3: 70.08%, and YOLOv4: 78.28%. The paper also proposes a new defect detection pipeline, called Image Enhanced Mask R-CNN (IE Mask R-CNN), that includes the best combination of image enhancement and augmentation techniques for pre-processing the dataset, and a Mask R-CNN model tuned for the task of WTB defect detection and classification. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
16 pages, 5280 KiB  
Article
Covariate Model of Pixel Vector Intensities of Invasive H. sosnowskyi Plants
by Ignas Daugela, Jurate Suziedelyte Visockiene, Egle Tumeliene, Jonas Skeivalas and Maris Kalinka
J. Imaging 2021, 7(3), 45; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030045 - 03 Mar 2021
Viewed by 1462
Abstract
This article describes an agricultural application of remote sensing methods aimed at aiding the eradication of the invasive plant Sosnowskyi borscht (H. sosnowskyi). These plants contain strong allergens that can cause burning skin pain, and they may displace native plant species by overshadowing them; even solitary individuals must therefore be controlled or destroyed to prevent damage to unused rural land and neighbouring land of various types (mostly degraded forest or housing areas). We describe several methods for detecting H. sosnowskyi plants in Sentinel-2A images, and verify our results. The workflow builds on recently improved technologies to pinpoint the exact locations (small areas) of plants, allowing them to be found more efficiently than by visual inspection on foot or by car. The outputs are images that can be classified by several methods, and estimates of the cross-covariance or single-vector auto-covariance functions of the contaminant parameters are calculated from random functions composed of arrays of plant pixel vectors. The correlation of the pixel vectors in H. sosnowskyi images depends on the density of the chlorophyll content of the plants. Estimates of the covariance functions were computed by varying the quantisation interval on a certain time scale, using a computer programme written in MATLAB. A correlation between the pixels of H. sosnowskyi plants and other plants was found, possibly because their structures have sufficiently distinctive spectral signatures (pixel values) in raster images. H. sosnowskyi can be identified and confirmed by combining two classification methods (supervised and unsupervised). The reliability of this combined method was verified using the theory of covariance functions, and the results showed that H. sosnowskyi plants had a higher correlation coefficient. This can be used to improve detection results and thereby to eliminate the plants in particular areas. Further experiments based on in situ fieldwork will be carried out to confirm these results and to quantify the efficiency of the method.
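For illustration only (the paper's own computations were done in a MATLAB programme not reproduced here), the auto-covariance function of a single pixel-intensity vector can be estimated at a range of lags as in the sketch below; the lag step plays the role of the quantisation interval, and the transect data are invented.

```python
import numpy as np

# Hypothetical sketch: biased sample auto-covariance of one pixel vector
# at lags 0..max_lag (the lag step is the "quantisation interval").
def autocovariance(x, max_lag):
    """Biased auto-covariance estimates c_0..c_max_lag of a 1D signal."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()                      # remove the mean first
    return np.array([np.dot(xc[:n - k], xc[k:]) / n
                     for k in range(max_lag + 1)])

# Invented pixel-intensity vector along a transect through a plant patch.
rng = np.random.default_rng(0)
pixels = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
acov = autocovariance(pixels, max_lag=10)
acorr = acov / acov[0]                     # normalised correlation coefficients
```

Dividing by the lag-zero value turns the covariance estimates into correlation coefficients, which is the quantity the abstract compares between plant species.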
49 pages, 3950 KiB  
Article
Quantitative Comparison of Deep Learning-Based Image Reconstruction Methods for Low-Dose and Sparse-Angle CT Applications
by Johannes Leuschner, Maximilian Schmidt, Poulami Somanya Ganguly, Vladyslav Andriiashen, Sophia Bethany Coban, Alexander Denker, Dominik Bauer, Amir Hadjifaradji, Kees Joost Batenburg, Peter Maass and Maureen van Eijnatten
J. Imaging 2021, 7(3), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030044 - 02 Mar 2021
Cited by 23 | Viewed by 6961
Abstract
The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model, and the reconstruction speed.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
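As a brief illustration of the headline metric (not code from the challenge), PSNR can be computed from the mean squared error between a reconstruction and its ground truth; the images below are synthetic placeholders.

```python
import numpy as np

# Hypothetical sketch: peak signal-to-noise ratio (PSNR) between a
# reconstruction and a ground-truth image, for intensities in [0, data_range].
def psnr(reconstruction, ground_truth, data_range=1.0):
    """PSNR in dB; higher means the reconstruction is closer to ground truth."""
    mse = np.mean((np.asarray(reconstruction, float)
                   - np.asarray(ground_truth, float)) ** 2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

gt = np.zeros((64, 64))
noisy = gt + 0.1      # constant error of 0.1 -> MSE ~ 0.01 -> PSNR ~ 20 dB
value = psnr(noisy, gt)
```

SSIM, the second metric mentioned, is structurally more involved (local means, variances, and covariances) and is typically taken from an imaging library rather than re-implemented.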
8 pages, 1595 KiB  
Brief Report
Validation of Image Qualities of a Novel Four-Mice Bed PET System as an Oncological and Neurological Analysis Tool
by Kyung Jun Kang, Se Jong Oh, Kyung Rok Nam, Heesu Ahn, Ji-Ae Park, Kyo Chul Lee and Jae Yong Choi
J. Imaging 2021, 7(3), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030043 - 26 Feb 2021
Cited by 3 | Viewed by 2021
Abstract
Background: Micro-positron emission tomography (micro-PET), a PET system dedicated to small animals, is used in biomedical studies and provides quantitative imaging of radiotracers. The single-bed configuration commonly used in micro-PET is laborious for large-scale studies. Here, we evaluated the image quality of a multi-bed system. Methods: Phantom imaging studies were performed to assess the recovery coefficients (RCs), uniformity, and spill-over ratios (SORs) in water- and air-filled chambers. 18F-FDG and 18F-FPEB PET images of xenograft and normal mice from the multi-bed and single-bed systems were compared. Results: For small diameters (<3 mm), the RC values of the two systems differed significantly; for large diameters (>4 mm), there were no differences in RC values between the two systems. Uniformity and SORs of both systems were within the tolerance limit of 15%. In the oncological study, the estimated 18F-FDG uptake in the tumor was significantly lower in the multi-bed system than in the single-bed system. However, 18F-FDG PET in xenograft mice with tumor size >4 mm revealed less than 12% variation between subjects within the multi-bed system group. In the neurological study, the SUV for the multi-bed group was 25–26% lower than that for the single-bed group; however, inter-object variations within the multi-bed system were below 7%. Conclusions: Although the multi-bed system yielded lower estimates of radiotracer uptake than the single-bed system, the inter-subject variations were within acceptable limits. Our results indicate that the multi-bed system can be used in oncological and neurological studies.
(This article belongs to the Special Issue SPECT and PET Imaging of Small Animals)
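For readers unfamiliar with the phantom metrics, the sketch below shows plausible definitions of the recovery coefficient and spill-over ratio computed from region-of-interest (ROI) means; the function names and numbers are hypothetical, not the authors' analysis code.

```python
# Hypothetical sketch of the two phantom image-quality metrics.
def recovery_coefficient(measured_roi_mean, true_concentration):
    """RC: measured ROI activity concentration over the true filled value.
    Values below 1 indicate partial-volume losses in small structures."""
    return measured_roi_mean / true_concentration

def spill_over_ratio(cold_roi_mean, hot_background_mean):
    """SOR: apparent counts inside a cold (water- or air-filled) chamber
    relative to the hot background, measuring activity spill-in."""
    return cold_roi_mean / hot_background_mean

# Invented values for a small rod and a water-filled cold chamber.
rc = recovery_coefficient(measured_roi_mean=0.87, true_concentration=1.0)
sor = spill_over_ratio(cold_roi_mean=0.12, hot_background_mean=1.0)
```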
24 pages, 4397 KiB  
Article
Fall Detection System-Based Posture-Recognition for Indoor Environments
by Abderrazak Iazzi, Mohammed Rziza and Rachid Oulad Haj Thami
J. Imaging 2021, 7(3), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030042 - 26 Feb 2021
Cited by 17 | Viewed by 3313
Abstract
The majority of the senior population lives alone at home. Falls can cause serious injuries, such as fractures or head injuries, which can prevent a person from moving around and carrying out daily activities normally; some can be life-threatening if not treated urgently. In this paper, we propose a fall detection system for elderly people based on their postures. The postures are recognized from the human silhouette, which has the advantage of preserving the privacy of the elderly. The effectiveness of our approach is demonstrated on two well-known datasets for human posture classification and three public datasets for fall detection, using a Support Vector Machine (SVM) classifier. The experimental results show that our method achieves not only a high fall detection rate but also a low false detection rate.
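As an illustrative aside (not the authors' system), one classic silhouette cue that a posture classifier of this kind can consume is the bounding-box aspect ratio, which separates upright from lying postures without revealing any identifiable image content; the blobs and threshold below are invented.

```python
import numpy as np

# Hypothetical sketch: a privacy-preserving posture feature computed from a
# binary silhouette -- the height/width ratio of its tight bounding box.
def bounding_box_aspect_ratio(silhouette):
    """Height divided by width of the tight bounding box of a binary mask."""
    rows = np.any(silhouette, axis=1)      # rows containing foreground
    cols = np.any(silhouette, axis=0)      # columns containing foreground
    height = np.ptp(rows.nonzero()[0]) + 1
    width = np.ptp(cols.nonzero()[0]) + 1
    return height / width

# Invented silhouettes: a tall narrow blob (standing) and its rotation (lying).
standing = np.zeros((40, 40), dtype=bool)
standing[5:35, 18:22] = True               # 30 px tall, 4 px wide -> ratio 7.5
lying = standing.T                         # 4 px tall, 30 px wide -> ratio < 1
```

In a full system such scalar features, extracted per frame, would be fed to the SVM; the paper's actual feature set is richer than this single cue.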
18 pages, 743 KiB  
Article
A Cortical-Inspired Sub-Riemannian Model for Poggendorff-Type Visual Illusions
by Emre Baspinar, Luca Calatroni, Valentina Franceschi and Dario Prandi
J. Imaging 2021, 7(3), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030041 - 24 Feb 2021
Cited by 3 | Viewed by 2503
Abstract
We consider Wilson-Cowan-type models for the mathematical description of orientation-dependent Poggendorff-like illusions. Our modelling improves on two previously proposed cortical-inspired approaches by embedding the sub-Riemannian heat kernel into the neuronal interaction term, in agreement with the intrinsically anisotropic functional architecture of V1 based on both local and lateral connections. For the numerical realisation of both models, we consider standard gradient descent algorithms combined with Fourier-based approaches for the efficient computation of the sub-Laplacian evolution. Our numerical results show that the sub-Riemannian kernel allows us to numerically reproduce visual misperceptions and inpainting-type biases more strongly than the previous approaches.
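The Fourier-based evolution step can be illustrated, in a deliberately simplified Euclidean 1D setting rather than the paper's sub-Riemannian one, by evolving the periodic heat equation spectrally: each Fourier mode of wavenumber k simply decays by exp(-k²t), so the whole evolution is two FFTs and a multiplication.

```python
import numpy as np

# Simplified sketch (1D Euclidean heat equation u_t = u_xx on a periodic
# domain), illustrating the Fourier trick used for efficient kernel evolution.
def heat_evolve(u, t, length=2 * np.pi):
    """Evolve u by the periodic heat equation for time t via the FFT."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    decay = np.exp(-(k ** 2) * t)                     # per-mode heat decay
    return np.real(np.fft.ifft(decay * np.fft.fft(u)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u0 = np.sin(3 * x)                 # single mode, k = 3
u1 = heat_evolve(u0, t=0.1)        # decays uniformly by exp(-9 * 0.1)
```

In the paper's setting the Laplacian is replaced by the anisotropic sub-Laplacian on the lifted orientation space, but the diagonalise-in-frequency structure of the computation is the same idea.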