J. Imaging, Volume 6, Issue 12 (December 2020) – 16 articles

Cover Story: X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (light-field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. We present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and allow one to generate unlimited real X-ray plenoptic data. We also demonstrate that X-ray light-fields enable the reconstruction of sharp spatial structures in three dimensions from single-shot data. This confirms that X-ray plenoptic cameras will lead to breakthroughs in a wide range of application areas.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
11 pages, 923 KiB  
Article
Deep Learning and Handcrafted Features for Virus Image Classification
by Loris Nanni, Eugenio De Luca, Marco Ludovico Facin and Gianluca Maguolo
J. Imaging 2020, 6(12), 143; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120143 - 21 Dec 2020
Cited by 26 | Viewed by 2866
Abstract
In this work, we present an ensemble of descriptors for the classification of virus images acquired using transmission electron microscopy. We trained multiple support vector machines on different sets of features extracted from the data, using both handcrafted algorithms and a pretrained deep neural network as feature extractors. The proposed fusion strongly boosts the performance of each stand-alone approach, achieving state-of-the-art results.
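
As a rough illustration of this kind of score-level fusion, the sketch below trains one SVM per feature set and sums class-probability estimates. The feature extractors are stubbed with random arrays (real handcrafted descriptors and CNN embeddings would replace them), so the reported accuracy is chance level and purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 200, 50, 5
y_train = rng.integers(0, n_classes, n_train)
y_test = rng.integers(0, n_classes, n_test)

# Stand-ins for, e.g., an LBP histogram and a CNN embedding per image.
feature_sets = {
    "handcrafted": (rng.normal(size=(n_train, 256)), rng.normal(size=(n_test, 256))),
    "deep": (rng.normal(size=(n_train, 512)), rng.normal(size=(n_test, 512))),
}

# One SVM per feature set; fuse by summing class-probability estimates.
fused = np.zeros((n_test, n_classes))
for name, (X_tr, X_te) in feature_sets.items():
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    clf.fit(X_tr, y_train)
    fused += clf.predict_proba(X_te)

pred = fused.argmax(axis=1)
print("fused accuracy (chance level on random data):", (pred == y_test).mean())
```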

17 pages, 5606 KiB  
Article
High-Profile VRU Detection on Resource-Constrained Hardware Using YOLOv3/v4 on BDD100K
by Vicent Ortiz Castelló, Ismael Salvador Igual, Omar del Tejo Catalá and Juan-Carlos Perez-Cortes
J. Imaging 2020, 6(12), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120142 - 19 Dec 2020
Cited by 10 | Viewed by 3565
Abstract
Vulnerable Road User (VRU) detection is a major application of object detection, aimed at reducing accidents in advanced driver-assistance systems and enabling the development of autonomous vehicles. Owing to the intrinsic complexity of computer vision and to limits on processing capacity and bandwidth, this task is not yet completely solved. For these reasons, the well-established YOLOv3 network and the newer YOLOv4 are assessed by training them on a large, recent on-road image dataset (BDD100K), both for VRU and for the full set of on-road classes, yielding a substantial improvement in detection quality over the authors' generic MS-COCO-trained models at a negligible cost in forward-pass time. Additionally, some models were retrained with the original Leaky ReLU convolutional activation functions of the YOLO implementation replaced by two recent activation functions: the self-regularized non-monotonic function (Mish) and its self-gated counterpart (Swish), with significant improvements in detection performance over the original activation function. Further trials incorporated recent data augmentation techniques (mosaic and CutMix) and several grid-size configurations, producing cumulative improvements over the previous results and a range of performance-throughput trade-offs.
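
The two activation functions are simple to state. As a sketch (PyTorch 1.9+ ships them built in as nn.Mish and nn.SiLU; the explicit formulas are written out here for clarity, and the conv block is a generic YOLO-style stand-in, not the paper's exact layer):

```python
import torch
import torch.nn as nn

class Mish(nn.Module):
    """Self-regularized non-monotonic activation: x * tanh(softplus(x))."""
    def forward(self, x):
        return x * torch.tanh(nn.functional.softplus(x))

class Swish(nn.Module):
    """Self-gated activation: x * sigmoid(x) (a.k.a. SiLU)."""
    def forward(self, x):
        return x * torch.sigmoid(x)

def conv_block(c_in, c_out, act=Mish):
    # A YOLO-style Conv-BN-Activation block with the activation swapped in.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        act(),
    )

x = torch.randn(1, 3, 64, 64)
print(conv_block(3, 16, Mish)(x).shape)    # torch.Size([1, 16, 64, 64])
print(conv_block(3, 16, Swish)(x).shape)
```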

23 pages, 9083 KiB  
Article
Attention-Based Fully Gated CNN-BGRU for Russian Handwritten Text
by Abdelrahman Abdallah, Mohamed Hamada and Daniyar Nurseitov
J. Imaging 2020, 6(12), 141; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120141 - 18 Dec 2020
Cited by 26 | Viewed by 4107
Abstract
This article considers the task of handwritten text recognition using attention-based encoder–decoder networks trained on the Kazakh and Russian languages. We have developed a novel deep neural network model based on a fully gated CNN, supported by multiple bidirectional gated recurrent unit (BGRU) layers and attention mechanisms, that achieves a 0.045 Character Error Rate (CER), 0.192 Word Error Rate (WER) and 0.253 Sequence Error Rate (SER) on the first test dataset, and 0.064 CER, 0.24 WER and 0.361 SER on the second. Ours is, to our knowledge, the first work to address handwriting recognition for the Kazakh and Russian languages. Our results confirm the value of the proposed Attention-Gated-CNN-BGRU approach for handwritten text recognition and indicate that it yields statistically significant improvements (p-value < 0.05) in sensitivity (recall) on the test datasets. The proposed method was evaluated on handwritten text databases in three languages: English, Russian, and Kazakh. It achieves better results on the Handwritten Kazakh and Russian (HKR) dataset than other well-known models.
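
The error metrics reported above are standard edit-distance ratios, computed at the character, word, and sequence level. A minimal self-contained sketch:

```python
# CER, WER and SER as edit-distance ratios.
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    return levenshtein(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    return levenshtein(ref.split(), hyp.split()) / max(len(ref.split()), 1)

def ser(refs, hyps):
    # Fraction of sequences containing at least one error.
    return sum(r != h for r, h in zip(refs, hyps)) / len(refs)

print(cer("привет мир", "привед мир"))   # 0.1: one wrong character out of ten
print(wer("привет мир", "привед мир"))   # 0.5: one wrong word out of two
```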

22 pages, 3483 KiB  
Article
The Empirical Watershed Wavelet
by Basile Hurat, Zariluz Alvarado and Jérôme Gilles
J. Imaging 2020, 6(12), 140; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120140 - 17 Dec 2020
Cited by 4 | Viewed by 2464
Abstract
The empirical wavelet transform is an adaptive multi-resolution analysis tool based on the idea of building filters on a data-driven partition of the Fourier domain. However, existing 2D extensions are constrained by the shape of the detected partitioning. In this paper, we provide theoretical results that permit us to build 2D empirical wavelet filters based on an arbitrary partitioning of the frequency domain. We also propose an algorithm to detect such a partitioning from an image spectrum by combining a scale-space representation, used to estimate the positions of the dominant harmonic modes, with a watershed transform that finds the boundaries of the supports making up the expected partition. This whole process defines the empirical watershed wavelet transform. We illustrate the effectiveness and advantages of this adaptive transform, first visually on toy images, and then on unsupervised texture segmentation and image deconvolution applications.
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
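
A rough sketch of the partition-detection step, under our own simplifying assumptions (a single Gaussian smoothing stands in for the paper's full scale-space mode detection, and the input is random data): find dominant modes of the smoothed log-spectrum, then watershed the negated spectrum from those markers to obtain one support per mode.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

img = np.random.rand(128, 128)                    # stand-in for a texture image
spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
smooth = ndimage.gaussian_filter(spec, sigma=3)   # crude scale-space step

# Markers at the dominant harmonic modes of the smoothed spectrum.
peaks = peak_local_max(smooth, min_distance=10)
markers = np.zeros_like(smooth, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Watershed of the inverted spectrum yields one support per detected mode;
# empirical wavelet filters would then be built on each labelled region.
partition = watershed(-smooth, markers)
print(partition.max(), "supports detected")
```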

56 pages, 11483 KiB  
Review
A Survey on Anti-Spoofing Methods for Facial Recognition with RGB Cameras of Generic Consumer Devices
by Zuheng Ming, Muriel Visani, Muhammad Muzzamil Luqman and Jean-Christophe Burie
J. Imaging 2020, 6(12), 139; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120139 - 15 Dec 2020
Cited by 29 | Viewed by 6575
Abstract
The widespread deployment of facial recognition-based biometric systems has made facial presentation attack detection (face anti-spoofing) an increasingly critical issue. This survey thoroughly investigates facial Presentation Attack Detection (PAD) methods of the past two decades that require only the RGB cameras of generic consumer devices. We present an attack scenario-oriented typology of the existing facial PAD methods and review more than 50 of the most influential facial PAD methods from the past two decades, together with their related issues. We present the reviewed methods comprehensively, following the proposed typology in chronological order. By doing so, we depict the main challenges, evolutions and current trends in the field of facial PAD and provide insights into its future research. From an experimental point of view, this survey provides a summarized overview of the available public databases and an extensive comparison of the results reported in the reviewed PAD papers.
(This article belongs to the Special Issue Image and Video Forensics)

15 pages, 4935 KiB  
Article
Emulation of X-ray Light-Field Cameras
by Nicola Viganò, Felix Lucka, Ombeline de La Rochefoucauld, Sophia Bethany Coban, Robert van Liere, Marta Fajardo, Philippe Zeitoun and Kees Joost Batenburg
J. Imaging 2020, 6(12), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120138 - 11 Dec 2020
Cited by 4 | Viewed by 2831
Abstract
X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (light-field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. Here, we present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and provide a tool for generating unlimited real X-ray plenoptic data. We also demonstrate that X-ray light-fields allow sharp spatial structures to be reconstructed in three dimensions (3D) from single-shot data.

16 pages, 3701 KiB  
Article
Use of Very High Spatial Resolution Commercial Satellite Imagery and Deep Learning to Automatically Map Ice-Wedge Polygons across Tundra Vegetation Types
by Md Abul Ehsan Bhuiyan, Chandi Witharana and Anna K. Liljedahl
J. Imaging 2020, 6(12), 137; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120137 - 11 Dec 2020
Cited by 40 | Viewed by 4857
Abstract
We developed a high-throughput mapping workflow, centered on deep learning (DL) convolutional neural network (CNN) algorithms running on high-performance distributed computing resources, to automatically characterize ice-wedge polygons (IWPs) in sub-meter resolution commercial satellite imagery. We applied a region-based CNN object instance segmentation algorithm, namely Mask R-CNN, to automatically detect and classify IWPs in the North Slope of Alaska. The central goal of our study was to systematically assess the DLCNN model's interoperability across varying tundra types (sedge, tussock sedge, and non-tussock sedge) and image scene complexities, to refine the understanding of opportunities and challenges for regional-scale mapping applications. We corroborated quantitative error statistics with detailed visual inspections to gauge IWP detection accuracy. We found promising model performance (detection accuracies of 89% to 96% and classification accuracies of 94% to 97%) for all candidate image scenes across the tundra types. The mapping workflow discerned the IWPs with low absolute mean relative error (AMRE) values (0.17–0.23). Results further suggest the importance of increasing the variability of training samples when using a transfer-learning strategy to map IWPs across heterogeneous tundra cover types. Overall, our findings demonstrate the robust performance of the IWP mapping workflow in multiple tundra landscapes.
(This article belongs to the Special Issue Image Retrieval in Transfer Learning)
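
For readers unfamiliar with the instance segmentation step, the sketch below runs a stock torchvision Mask R-CNN (torchvision >= 0.13 assumed) on a random tensor standing in for a satellite image tile. Weights are uninitialized and the two-class head (background + IWP) is our illustrative choice, not the authors' trained model.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Untrained two-class (background + IWP) model; a real workflow would
# fine-tune this on annotated satellite image tiles.
model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                              num_classes=2).eval()

tile = torch.rand(3, 512, 512)          # stand-in for a satellite image tile
with torch.no_grad():
    out = model([tile])[0]              # dict: boxes, labels, scores, masks

keep = out["scores"] > 0.5              # confidence threshold
print(f"{keep.sum().item()} instances above threshold")
```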

16 pages, 3957 KiB  
Article
4D Bragg Edge Tomography of Directional Ice Templated Graphite Electrodes
by Ralf F. Ziesche, Anton S. Tremsin, Chun Huang, Chun Tan, Patrick S. Grant, Malte Storm, Dan J. L. Brett, Paul R. Shearing and Winfried Kockelmann
J. Imaging 2020, 6(12), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120136 - 11 Dec 2020
Cited by 9 | Viewed by 3035
Abstract
Bragg edge tomography was carried out on novel, ultra-thick, directional ice templated graphite electrodes for Li-ion battery cells to visualise the distribution of graphite and of the stable lithiation phases LiC₁₂ and LiC₆. The four-dimensional, wavelength-resolved neutron Bragg edge tomography technique allowed the crystallographic lithiation states to be investigated and compared with the electrode state of charge. The technique provided insight into the crystallographic changes during de-/lithiation over the electrode thickness by mapping the attenuation curves and Bragg edge parameters with a spatial resolution of approximately 300 µm. This feasibility study was performed on the IMAT beamline at the ISIS pulsed neutron spallation source, UK, and is the first application of the 4D Bragg edge tomography method to Li-ion battery electrodes. The utility of the technique was further enhanced by correlation with corresponding X-ray tomography data obtained at the Diamond Light Source, UK.
(This article belongs to the Special Issue Neutron Imaging)
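
As a toy illustration of the per-voxel analysis described above, one can fit a smoothed-step model to a transmission curve to extract the Bragg edge wavelength, which differs between graphite, LiC₁₂ and LiC₆. The error-function edge model and all numbers below are our own assumptions, not necessarily the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def bragg_edge(lam, lam0, width, lo, hi):
    # Smoothed step: transmission rises from `lo` to `hi` around lam0.
    return lo + (hi - lo) * 0.5 * (1 + erf((lam - lam0) / width))

lam = np.linspace(3.0, 4.5, 200)                     # wavelength in Angstrom
truth = bragg_edge(lam, 3.7, 0.05, 0.4, 0.7)
data = truth + np.random.normal(0, 0.01, lam.size)   # noisy voxel curve

popt, _ = curve_fit(bragg_edge, lam, data, p0=[3.6, 0.1, 0.3, 0.8])
print(f"fitted edge position: {popt[0]:.3f} Angstrom")
```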

26 pages, 3307 KiB  
Article
A Computationally Efficient Reconstruction Algorithm for Circular Cone-Beam Computed Tomography Using Shallow Neural Networks
by Marinus J. Lagerwerf, Daniël M. Pelt, Willem Jan Palenstijn and Kees Joost Batenburg
J. Imaging 2020, 6(12), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120135 - 08 Dec 2020
Cited by 8 | Viewed by 2725
Abstract
Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of producing accurate reconstructions from limited data. In this paper, we introduce the Neural Network Feldkamp–Davis–Kress (NN-FDK) algorithm, which adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed to have low training data requirements and to be fast to train, so that it can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK algorithm computes a reconstruction substantially faster than all the tested alternatives except the standard FDK algorithm, and that it can compute accurate CCB CT reconstructions in cases of high noise, few projection angles or large cone angles. Moreover, the training time of an NN-FDK network is orders of magnitude lower than that of the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
(This article belongs to the Special Issue Inverse Problems and Imaging)
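
Schematically, the NN-FDK idea is that several FDK-type reconstructions (with different filters) give each voxel a small feature vector, and a shallow network maps that vector to the target value. The sketch below stubs the FDK reconstructions (which in practice would come from a toolbox such as ASTRA) with synthetic arrays; it shows only the shallow-regression structure, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_voxels, n_filters = 5000, 4
rng = np.random.default_rng(1)

# Stand-in for FDK reconstructions of the same scan with n_filters filters.
fdk_features = rng.normal(size=(n_voxels, n_filters))
# Stand-in for a high-quality reference reconstruction (training target).
target = fdk_features @ rng.normal(size=n_filters) + 0.1 * rng.normal(size=n_voxels)

# Shallow network: one small hidden layer keeps training cheap, matching
# the paper's emphasis on low training cost.
net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=500, random_state=0)
net.fit(fdk_features[:4000], target[:4000])
print("held-out R^2:", net.score(fdk_features[4000:], target[4000:]))
```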

12 pages, 659 KiB  
Article
Light Yield Response of Neutron Scintillation Screens to Sudden Flux Changes
by Tobias Neuwirth, Bernhard Walfort, Simon Sebold and Michael Schulz
J. Imaging 2020, 6(12), 134; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120134 - 05 Dec 2020
Viewed by 2433
Abstract
We studied the initial and long-term light yield of different scintillation screen mixtures for neutron imaging under constant neutron irradiation, evaluating the light yield at different neutron flux levels and at different temperatures. As high frame rate imaging is a topic of interest in the neutron imaging community, we also present and discuss the decay behavior of the different scintillation screen mixtures on a time scale of seconds. We found that the decay time of ZnS:Cu/⁶LiF excited with a high neutron flux can be much longer than typically stated. While most of the tested scintillation screens do not provide a significant improvement over currently used materials, Zn(Cd)S:Ag/⁶LiF appears to be a good candidate for high frame rate imaging owing to its high light yield, long-term stability and fast decay compared to the other screens evaluated.
(This article belongs to the Special Issue Neutron Imaging)
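
A simple way to quantify the decay behaviour discussed above is to fit an afterglow model to the light yield measured after the beam is switched off. The single-exponential model and synthetic data below are our own simplification; real screens may need multi-exponential fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def afterglow(t, amp, tau, offset):
    # Single-exponential afterglow with a constant background.
    return amp * np.exp(-t / tau) + offset

t = np.linspace(0, 10, 100)                       # seconds after beam-off
signal = afterglow(t, 1.0, 2.5, 0.02)
signal += np.random.normal(0, 0.01, t.size)       # measurement noise

popt, _ = curve_fit(afterglow, t, signal, p0=[1.0, 1.0, 0.0])
print(f"decay time: {popt[1]:.2f} s")
```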

17 pages, 1603 KiB  
Article
3D Non-Local Neural Network: A Non-Invasive Biomarker for Immunotherapy Treatment Outcome Prediction. Case-Study: Metastatic Urothelial Carcinoma
by Francesco Rundo, Giuseppe Luigi Banna, Luca Prezzavento, Francesca Trenta, Sabrina Conoci and Sebastiano Battiato
J. Imaging 2020, 6(12), 133; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120133 - 03 Dec 2020
Cited by 10 | Viewed by 2433
Abstract
Immunotherapy is regarded as one of the most significant breakthroughs in cancer treatment. Unfortunately, only a small percentage of patients respond properly to the treatment and, to date, there are no efficient biomarkers able to discriminate early the patients eligible for it. To help overcome these limitations, we investigate an innovative non-invasive deep pipeline, integrating Computed Tomography (CT) imaging, for predicting the response to immunotherapy treatment. We report preliminary results collected as part of a case study in which we validated the implemented method on a clinical dataset of patients affected by Metastatic Urothelial Carcinoma. The proposed pipeline aims to discriminate patients with high chances of response from those with disease progression. Specifically, we propose ad hoc 3D deep networks integrating self-attention mechanisms to estimate the immunotherapy treatment response from CT-scan images and the patients' hemato-chemical data. The performance evaluation (average accuracy close to 92%) confirms the effectiveness of the proposed approach as an immunotherapy treatment response biomarker.
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
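
A hedged sketch of a 3D non-local (self-attention) block of the kind such a pipeline integrates; the layer sizes and residual structure below follow the generic non-local network pattern and are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class NonLocalBlock3D(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv3d(channels, reduced, 1)   # query projection
        self.phi = nn.Conv3d(channels, reduced, 1)     # key projection
        self.g = nn.Conv3d(channels, reduced, 1)       # value projection
        self.out = nn.Conv3d(reduced, channels, 1)

    def forward(self, x):
        b, c, d, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, dhw, r)
        k = self.phi(x).flatten(2)                     # (b, r, dhw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, dhw, r)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return x + self.out(y)                         # residual connection

vol = torch.randn(1, 8, 8, 16, 16)                     # CT sub-volume
print(NonLocalBlock3D(8)(vol).shape)                   # shape is unchanged
```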

28 pages, 7138 KiB  
Article
Task-Driven Learned Hyperspectral Data Reduction Using End-to-End Supervised Deep Learning
by Mathé T. Zeegers, Daniël M. Pelt, Tristan van Leeuwen, Robert van Liere and Kees Joost Batenburg
J. Imaging 2020, 6(12), 132; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120132 - 02 Dec 2020
Cited by 7 | Viewed by 4604
Abstract
An important challenge in hyperspectral imaging tasks is coping with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account; consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches can learn the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is constrained by computational efficiency. We propose a novel supervised deep learning approach that combines data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant to the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods and can be used in a wide range of problem settings. The integration of knowledge about the task allows for greater data compression and higher accuracies than standard data reduction methods.
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
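
One common way to realize such a learned reduction, sketched below under our own assumptions about the architecture, is a 1x1 convolution that compresses the spectral dimension to a few channels and is trained jointly with the downstream task CNN, so that task-relevant spectral features survive the reduction. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

num_bins, reduced = 200, 2      # e.g. 200 spectral bins down to 2 channels

model = nn.Sequential(
    nn.Conv2d(num_bins, reduced, kernel_size=1),    # learned data reduction
    nn.Conv2d(reduced, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                            # per-pixel segmentation
)

x = torch.randn(1, num_bins, 64, 64)   # hyperspectral image: bins as channels
print(model(x).shape)                  # torch.Size([1, 2, 64, 64])
# Backpropagating the task loss through this stack also updates the
# reduction layer, which is the end-to-end aspect described above.
```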

38 pages, 2866 KiB  
Review
A Survey of Deep Learning for Lung Disease Detection on Medical Images: State-of-the-Art, Taxonomy, Issues and Future Directions
by Stefanus Tao Hwa Kieu, Abdullah Bade, Mohd Hanafi Ahmad Hijazi and Hoshang Kolivand
J. Imaging 2020, 6(12), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120131 - 01 Dec 2020
Cited by 67 | Viewed by 10921
Abstract
Recent developments in deep learning support the identification and classification of lung diseases in medical images, and numerous works on the detection of lung disease using deep learning can be found in the literature. This paper presents a survey of deep learning for lung disease detection in medical images. Only one survey on deep learning for lung disease detection has been published in the last five years, and it lacks a taxonomy and an analysis of the trends in recent work. The objectives of this paper are to present a taxonomy of state-of-the-art deep learning based lung disease detection systems, to visualise the trends of recent work in the domain, and to identify the remaining issues and potential future directions. Ninety-eight articles published from 2016 to 2020 were considered in this survey. The taxonomy consists of seven attributes that are common to the surveyed articles: image types, features, data augmentation, types of deep learning algorithms, transfer learning, ensembles of classifiers and types of lung diseases. The taxonomy could be used by other researchers to plan their research contributions and activities, and the suggested future directions could further improve the efficiency and increase the number of deep learning aided lung disease detection applications.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)

20 pages, 21639 KiB  
Article
FACS-Based Graph Features for Real-Time Micro-Expression Recognition
by Adamu Muhammad Buhari, Chee-Pun Ooi, Vishnu Monn Baskaran, Raphaël C. W. Phan, KokSheik Wong and Wooi-Haw Tan
J. Imaging 2020, 6(12), 130; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120130 - 30 Nov 2020
Cited by 11 | Viewed by 3807
Abstract
Several studies on micro-expression recognition have contributed mainly to accuracy improvement, while computational complexity has received comparatively less attention, increasing the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., the onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks and the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) uses facial landmark points to compute a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph serve as features. Moreover, the proposed technique performs micro-expression (ME) recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 under leave-one-subject-out cross-validation on the SAMM dataset. The proposed technique also computes features at a speed of 2 ms per sample on a Xeon Processor E5-2650 machine.
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
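
A minimal sketch of this feature computation: for each Action Unit, take the segments between its landmark indices and use segment length and gradient (slope) as features. The AU-to-landmark groupings below are placeholders, not the paper's exact FACS mapping.

```python
import numpy as np

AU_SEGMENTS = {
    "AU4_brow_lowerer": [(19, 21), (22, 24)],   # hypothetical index pairs
    "AU12_lip_corner": [(48, 54), (51, 57)],
}

def graph_features(landmarks, au_segments=AU_SEGMENTS):
    """landmarks: (68, 2) array of (x, y) points from a single frame."""
    feats = []
    for pairs in au_segments.values():
        for i, j in pairs:
            dx, dy = landmarks[j] - landmarks[i]
            feats.append(np.hypot(dx, dy))              # segment distance
            feats.append(dy / dx if dx != 0 else 0.0)   # segment gradient
    return np.asarray(feats)

pts = np.random.rand(68, 2) * 100     # stand-in for detected landmarks
print(graph_features(pts))            # one feature vector per single frame
```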

15 pages, 311 KiB  
Article
Bucket of Deep Transfer Learning Features and Classification Models for Melanoma Detection
by Mario Manzo and Simone Pellino
J. Imaging 2020, 6(12), 129; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120129 - 26 Nov 2020
Cited by 19 | Viewed by 2542
Abstract
Malignant melanoma is the deadliest form of skin cancer and, in recent years, its worldwide incidence rate has been growing rapidly. The most effective approach to targeted treatment is early diagnosis. Deep learning algorithms, specifically convolutional neural networks, are a methodology for image analysis and representation: they optimize the feature design task, which is essential for automatic approaches on different types of images, including medical ones. In this paper, we adopt pretrained deep convolutional neural network architectures to represent images for the prediction of skin lesion melanoma. First, we apply a transfer learning approach to extract image features. Second, we use the transferred features inside an ensemble classification context. Specifically, the framework trains individual classifiers on balanced subspaces and combines their predictions through statistical measures. Experiments on datasets of skin lesion images show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
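
A loose sketch of the classification stage: deep features (stubbed with random arrays here) feed several classifiers, each trained on a class-balanced subsample, and predictions are combined by majority vote. The classifier choice and sampling scheme are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))                 # stand-in for CNN features
y = (rng.random(300) < 0.2).astype(int)         # imbalanced: ~20% melanoma

def balanced_subsample(X, y, rng):
    # Equal numbers from each class, sampled with replacement.
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=min(np.bincount(y)), replace=True)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

ensemble = []
for _ in range(5):
    Xb, yb = balanced_subsample(X, y, rng)
    ensemble.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

votes = np.mean([clf.predict(X) for clf in ensemble], axis=0)
pred = (votes >= 0.5).astype(int)
print("training-set accuracy (illustrative only):", (pred == y).mean())
```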

11 pages, 5704 KiB  
Article
A Fast Neutron Radiography System Using a High Yield Portable DT Neutron Source
by David L. Williams, Craig M. Brown, David Tong, Alexander Sulyman and Charles K. Gary
J. Imaging 2020, 6(12), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120128 - 26 Nov 2020
Cited by 8 | Viewed by 2593
Abstract
Resolution measurements were made using 14.1 MeV neutrons from a high-yield, portable DT neutron generator and a neutron camera based on a scintillation screen viewed by a digital camera. Resolution was measured with a custom-built plastic USAF-1951 resolution chart of dimensions 125 × 98 × 25.4 mm³, and by calculating the modulation transfer function from the edge-spread function of the edges of plastic and steel objects. A portable neutron generator with a yield of 3 × 10⁹ n/s (DT) and a spot size of 1.5 mm was used to irradiate the object with neutrons for 10 min. The neutron camera, based on a ⁶LiF/ZnS:Cu-doped polypropylene scintillation screen and a digital camera, was placed at a distance of 140 cm and produced an image with a spatial resolution of 0.35 cycles per millimeter.
(This article belongs to the Special Issue Neutron Imaging)
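
The MTF-from-edge measurement mentioned above follows a standard recipe: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), then take the Fourier transform magnitude to obtain the MTF. A sketch on a synthetic edge profile (pixel pitch, edge width and the 10% resolution criterion are assumed values, not the paper's):

```python
import numpy as np

pixel_mm = 0.5                                  # assumed pixel pitch in mm
x = np.arange(256) * pixel_mm
esf = 1 / (1 + np.exp(-(x - 64) / 1.5))         # stand-in for a measured edge

lsf = np.gradient(esf, pixel_mm)                # LSF = derivative of the ESF
lsf /= lsf.sum()                                # normalise so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)   # cycles per millimetre

# Spatial resolution is often quoted where the MTF drops to 10%.
cutoff = freqs[np.argmax(mtf < 0.1)]
print(f"MTF10 resolution: {cutoff:.2f} cycles/mm")
```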
