
Machine Learning for Biomedical Imaging and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 August 2020) | Viewed by 123659

Special Issue Editors


Dr. Andy Taylor
Guest Editor
Senior Scientist at Bioascent Drug Discovery, Newhouse, Lanarkshire ML1 5UH, UK
Interests: machine learning; imaging; drug discovery; data science; robotics and software development

Dr. Jonine Figueroa
Co-Guest Editor
Usher Institute, University of Edinburgh, Teviot Place, Edinburgh EH8 9AG, UK
Interests: cancer molecular epidemiology studies

Special Issue Information

Dear Colleagues,

The use of machine learning techniques within the field of biomedical imaging and sensing has risen in recent years. Applications in the literature include diagnostics, image reconstruction, and the generation of synthetic human data. In terms of methodology, the last few years have seen the rise of deep learning and the increasing use of novel approaches such as generative adversarial networks and natural language generation. The field is also becoming increasingly accessible to researchers in medicine and biology who have not traditionally been machine learning practitioners, thanks to the availability of software libraries such as Keras and TensorFlow and software packages such as WEKA.

The topics of interest include, but are not limited to, the following:

  • Classical machine learning techniques for image analysis
  • Machine learning and health outcomes
  • Machine learning for biomedical sensing
  • The generation of synthetic patient data
  • Quantitative image analysis
  • Deep learning and diagnosis
  • Biomedical image reconstruction
  • The generation of natural language descriptions of biomedical images
  • Computer aided diagnosis

Dr. Andy Taylor
Dr. Jonine Figueroa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning
  • Image analysis
  • Generative adversarial networks
  • Deep learning
  • Sensing
  • Image reconstruction
  • Natural language generation
  • Artificial intelligence
  • Health informatics

Published Papers (20 papers)


Research


14 pages, 1699 KiB  
Article
Thyroid Nodule Classification for Physician Decision Support Using Machine Learning-Evaluated Geometric and Morphological Features
by Elmer Jeto Gomes Ataide, Nikhila Ponugoti, Alfredo Illanes, Simone Schenke, Michael Kreissl and Michael Friebe
Sensors 2020, 20(21), 6110; https://0-doi-org.brum.beds.ac.uk/10.3390/s20216110 - 27 Oct 2020
Cited by 24 | Viewed by 3900
Abstract
The classification of thyroid nodules from ultrasound (US) imaging is performed using the Thyroid Imaging Reporting and Data System (TIRADS) guidelines, which classify nodules based on visual and textural characteristics: composition, shape, size, echogenicity, calcifications, margins, and vascularity. This work aims to reduce subjectivity in the current diagnostic process by using geometric and morphological (G-M) features that represent the visual characteristics of thyroid nodules to provide physicians with decision support. A total of 27 G-M features were extracted from images obtained from an open-access US thyroid nodule image database, and 11 significant features in accordance with TIRADS were selected from this global feature set. Each feature was labeled (0 = benign, 1 = malignant) and the performance of the selected features was evaluated using machine learning (ML). G-M features combined with ML classified thyroid nodules with high accuracy, sensitivity, and specificity. The results were compared against state-of-the-art methods and perform favorably in comparison. Furthermore, this method can act as a computer-aided diagnosis (CAD) system for physicians by providing a validation of the TIRADS visual characteristics used for the classification of thyroid nodules in US images.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
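As a rough illustration of the workflow described above (selecting a subset of hand-crafted G-M features and evaluating a machine-learning classifier), a minimal sketch with synthetic placeholder data is shown below; the feature matrix, the SelectKBest step, and the random forest are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: select 11 of 27 geometric/morphological features, then evaluate a
# classifier with cross-validation. X and y below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 27))      # 27 G-M features per nodule (placeholder data)
y = rng.integers(0, 2, size=300)    # 0 = benign, 1 = malignant

model = make_pipeline(
    SelectKBest(f_classif, k=11),   # keep 11 features, as in the abstract
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "recall", "precision"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```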

17 pages, 1686 KiB  
Article
Population Graph-Based Multi-Model Ensemble Method for Diagnosing Autism Spectrum Disorder
by Zarina Rakhimberdina, Xin Liu and Tsuyoshi Murata
Sensors 2020, 20(21), 6001; https://0-doi-org.brum.beds.ac.uk/10.3390/s20216001 - 22 Oct 2020
Cited by 24 | Viewed by 4071
Abstract
With the advancement of brain imaging techniques and a variety of machine learning methods, significant progress has been made in brain disorder diagnosis, in particular for Autism Spectrum Disorder. The development of machine learning models that can differentiate between healthy subjects and patients is of great importance. Recently, graph neural networks have found increasing application in domains where the population’s structure is modeled as a graph. The application of graphs for analyzing brain imaging datasets helps to discover clusters of individuals with a specific diagnosis. However, the choice of the appropriate population graph becomes a challenge in practice, as no systematic way exists for defining it. To solve this problem, we propose a population graph-based multi-model ensemble, which improves the prediction regardless of the choice of the underlying graph. First, we construct a set of population graphs using different combinations of imaging and phenotypic features and evaluate them using Graph Signal Processing tools. Subsequently, we utilize a neural network architecture to combine multiple graph-based models. The results demonstrate that the proposed model outperforms the state-of-the-art methods on the Autism Brain Imaging Data Exchange (ABIDE) dataset.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
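The ensembling step (combining the outputs of several graph-based models with a neural network) can be illustrated with a hedged sketch; the per-graph probabilities, cohort size, and combiner architecture below are placeholders, not the authors' implementation.

```python
# Hedged sketch: learn to combine the per-subject probabilities produced by several
# population-graph models. All inputs below are synthetic placeholders.
import numpy as np
import tensorflow as tf

n_subjects, n_graphs = 871, 5                                # e.g. an ABIDE-sized cohort
rng = np.random.default_rng(0)
per_graph_probs = rng.uniform(size=(n_subjects, n_graphs)).astype("float32")
labels = rng.integers(0, 2, size=n_subjects).astype("float32")

combiner = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(n_graphs,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # ensembled diagnosis probability
])
combiner.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
combiner.fit(per_graph_probs, labels, epochs=10, batch_size=32, verbose=0)
```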

17 pages, 2304 KiB  
Article
Synthesis of Prostate MR Images for Classification Using Capsule Network-Based GAN Model
by Houqiang Yu and Xuming Zhang
Sensors 2020, 20(20), 5736; https://0-doi-org.brum.beds.ac.uk/10.3390/s20205736 - 09 Oct 2020
Cited by 12 | Viewed by 2242
Abstract
Prostate cancer remains a major health concern among elderly men. Deep learning is a state-of-the-art technique for MR image-based prostate cancer diagnosis, but one of the major bottlenecks is the severe lack of annotated MR images. Traditional and Generative Adversarial Network (GAN)-based data augmentation methods cannot ensure the quality and diversity of the generated training samples. In this paper, we propose a novel GAN model for the synthesis of MR images, exploiting the GAN's ability to model complex data distributions. The proposed model is based on the architecture of the deep convolutional GAN. To learn a more equivariant representation of images that is robust to changes in the pose and spatial relationships of objects, a capsule network replaces the CNN used in the discriminator of a regular GAN. Meanwhile, the least squares loss is adopted for both the generator and the discriminator to address the vanishing gradient problem of the sigmoid cross-entropy loss function in a regular GAN. Extensive experiments are conducted on simulated and real MR images. The results demonstrate that the proposed capsule network-based GAN model can generate more realistic and higher-quality MR images than the compared GANs. The quantitative comparisons show that, among all evaluated models, the proposed GAN generally achieves the smallest Kullback–Leibler divergence values for the image generation task and provides the best classification performance when it is introduced into the deep learning method for the image classification task.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
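The least-squares objective mentioned in the abstract replaces the sigmoid cross-entropy loss of a regular GAN; a minimal sketch of that loss (with the capsule-based generator and discriminator omitted) could look like the following.

```python
# Hedged sketch of LSGAN-style losses; the capsule-network discriminator and the
# generator from the paper are not reproduced here.
import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()

def discriminator_loss(real_scores, fake_scores):
    # push discriminator scores for real images toward 1 and for generated images toward 0
    return mse(tf.ones_like(real_scores), real_scores) + \
           mse(tf.zeros_like(fake_scores), fake_scores)

def generator_loss(fake_scores):
    # the generator tries to make generated images score as "real"
    return mse(tf.ones_like(fake_scores), fake_scores)
```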

19 pages, 3014 KiB  
Article
Deep Recurrent Neural Networks for Automatic Detection of Sleep Apnea from Single Channel Respiration Signals
by Hisham ElMoaqet, Mohammad Eid, Martin Glos, Mutaz Ryalat and Thomas Penzel
Sensors 2020, 20(18), 5037; https://0-doi-org.brum.beds.ac.uk/10.3390/s20185037 - 04 Sep 2020
Cited by 51 | Viewed by 6385
Abstract
Sleep apnea is a common sleep disorder that causes repeated breathing interruptions during sleep. The performance of automated apnea detection methods based on respiratory signals depends on the signals considered and the feature extraction methods. Moreover, feature engineering techniques are highly dependent on experts' experience and their prior knowledge about different physiological signals and the conditions of the subjects. To overcome these problems, a novel deep recurrent neural network (RNN) framework is developed for automated feature extraction and detection of apnea events from single respiratory channel inputs. Long short-term memory (LSTM) and bidirectional long short-term memory (BiLSTM) networks are investigated to develop the proposed deep RNN model. The proposed framework is evaluated over three respiration signals: oronasal thermal airflow (FlowTh), nasal pressure (NPRE), and abdominal respiratory inductance plethysmography (ABD). To demonstrate our results, we use polysomnography (PSG) data of 17 patients with obstructive, central, and mixed apnea events. Our results indicate the effectiveness of the proposed framework in the automatic extraction of temporal features and the automated detection of apneic events over the different respiratory signals considered in this study. Using a deep BiLSTM-based detection model, the NPRE signal achieved the highest overall detection results with a true positive rate (sensitivity) of 90.3%, a true negative rate (specificity) of 83.7%, and an area under the receiver operating characteristic curve of 92.4%. The present results contribute a new deep learning approach for the automated detection of sleep apnea events from single-channel respiration signals that can potentially serve as a helpful alternative to the traditional PSG method.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
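A hedged sketch of a BiLSTM detector of the kind described above, mapping a fixed-length window of a single respiration channel to an apnea probability, is shown below; the window length, sampling rate, and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: bidirectional LSTM classifier over single-channel respiration windows.
import tensorflow as tf

window_len = 600   # e.g. 60 s at 10 Hz (assumed, not from the paper)
model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True),
                                  input_shape=(window_len, 1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(apnea) for the window
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```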

17 pages, 5764 KiB  
Article
Breast Cancer Histopathology Image Classification Using an Ensemble of Deep Learning Models
by Zabit Hameed, Sofia Zahia, Begonya Garcia-Zapirain, José Javier Aguirre and Ana María Vanegas
Sensors 2020, 20(16), 4373; https://0-doi-org.brum.beds.ac.uk/10.3390/s20164373 - 05 Aug 2020
Cited by 129 | Viewed by 10817
Abstract
Breast cancer is one of the major public health issues and is considered a leading cause of cancer-related deaths among women worldwide. Its early diagnosis can effectively help to increase the chances of survival. To this end, biopsy is usually performed as the gold-standard approach, in which tissues are collected for microscopic analysis. However, the histopathological analysis of breast cancer is non-trivial, labor-intensive, and may lead to a high degree of disagreement among pathologists. Therefore, an automatic diagnostic system could assist pathologists in improving the effectiveness of the diagnostic process. This paper presents an ensemble deep learning approach for the classification of non-carcinoma and carcinoma breast cancer histopathology images using our collected dataset. We trained four different models based on pre-trained VGG16 and VGG19 architectures. Initially, we performed 5-fold cross-validation on all the individual models, namely, fully-trained VGG16, fine-tuned VGG16, fully-trained VGG19, and fine-tuned VGG19. Then, we followed an ensemble strategy of averaging the predicted probabilities and found that the ensemble of fine-tuned VGG16 and fine-tuned VGG19 achieved competitive classification performance, especially for the carcinoma class. The ensemble of fine-tuned VGG16 and VGG19 models offered a sensitivity of 97.73% for the carcinoma class, an overall accuracy of 95.29%, and an F1 score of 95.29%. These experimental results demonstrate that our proposed deep learning approach is effective for the automatic classification of complex-natured histopathology images of breast cancer, and more specifically for carcinoma images.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
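The ensemble strategy described above (averaging the predicted probabilities of the fine-tuned networks) reduces to a few lines; in this sketch, `vgg16_model`, `vgg19_model`, and `x_test` are assumed, already fine-tuned/preprocessed objects used purely for illustration.

```python
# Hedged sketch of probability averaging across an ensemble of Keras models.
import numpy as np

def ensemble_predict(models, x):
    probs = [m.predict(x, verbose=0) for m in models]   # per-model class probabilities
    return np.mean(probs, axis=0)                        # simple average

# Illustrative usage (names are placeholders):
# avg_probs = ensemble_predict([vgg16_model, vgg19_model], x_test)
# y_pred = avg_probs.argmax(axis=1)   # e.g. 0 = non-carcinoma, 1 = carcinoma
```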

18 pages, 5764 KiB  
Article
A Novel Method to Identify Pneumonia through Analyzing Chest Radiographs Employing a Multichannel Convolutional Neural Network
by Abdullah-Al Nahid, Niloy Sikder, Anupam Kumar Bairagi, Md. Abdur Razzaque, Mehedi Masud, Abbas Z. Kouzani and M. A. Parvez Mahmud
Sensors 2020, 20(12), 3482; https://0-doi-org.brum.beds.ac.uk/10.3390/s20123482 - 19 Jun 2020
Cited by 23 | Viewed by 5866
Abstract
Pneumonia is a virulent disease that causes the deaths of millions of people around the world. Every year it kills more children than malaria, AIDS, and measles combined, and it accounts for approximately one in five child deaths worldwide. The invention of antibiotics and vaccines in the past century has notably increased the survival rate of pneumonia patients. Currently, the primary challenge is to detect the disease at an early stage and determine its type in order to initiate the appropriate treatment. Usually, a trained physician or radiologist undertakes the task of diagnosing pneumonia by examining the patient's chest X-ray. However, the number of such trained individuals is nominal compared to the 450 million people who are affected by pneumonia every year. Fortunately, this challenge can be met by introducing modern computers and improved machine learning techniques into pneumonia diagnosis. For the past two decades, researchers have been trying to develop a method to automatically detect pneumonia by analyzing the symptoms of the disease and chest radiographic images of patients. With the development of cogent deep learning algorithms, the construction of such an automatic system is very much within the realm of possibility. In this paper, a novel diagnostic method is proposed that uses image processing and deep learning techniques to detect pneumonia from chest X-ray images. The method has been tested on a widely used chest radiography dataset, and the obtained results indicate that the model has strong potential to be employed in an automatic pneumonia diagnosis scheme.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

22 pages, 7214 KiB  
Article
Comparison of Deep-Learning and Conventional Machine-Learning Methods for the Automatic Recognition of the Hepatocellular Carcinoma Areas from Ultrasound Images
by Raluca Brehar, Delia-Alexandrina Mitrea, Flaviu Vancea, Tiberiu Marita, Sergiu Nedevschi, Monica Lupsor-Platon, Magda Rotaru and Radu Ioan Badea
Sensors 2020, 20(11), 3085; https://0-doi-org.brum.beds.ac.uk/10.3390/s20113085 - 29 May 2020
Cited by 65 | Viewed by 6059
Abstract
The emergence of deep-learning methods in different computer vision tasks has proved to offer increased detection, recognition, or segmentation accuracy when large annotated image datasets are available. In the case of medical image processing and computer-aided diagnosis within ultrasound images, where the amount of available annotated data is smaller, a natural question arises: are deep-learning methods better than conventional machine-learning methods, and how do the conventional machine-learning methods behave in comparison with deep-learning methods on the same dataset? Based on the study of various deep-learning architectures, a lightweight multi-resolution Convolutional Neural Network (CNN) architecture is proposed. It is suitable for differentiating, within ultrasound images, between Hepatocellular Carcinoma (HCC) and the cirrhotic parenchyma (PAR) on which HCC has evolved. The proposed deep-learning model is compared with other CNN architectures that have been adapted by transfer learning for the ultrasound binary classification task, as well as with conventional machine-learning (ML) solutions trained on textural features. The achieved results show that the deep-learning approach outperforms the classical machine-learning solutions by providing higher classification performance.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

15 pages, 8881 KiB  
Article
Integrating 3D Model Representation for an Accurate Non-Invasive Assessment of Pressure Injuries with Deep Learning
by Sofia Zahia, Begonya Garcia-Zapirain and Adel Elmaghraby
Sensors 2020, 20(10), 2933; https://0-doi-org.brum.beds.ac.uk/10.3390/s20102933 - 21 May 2020
Cited by 21 | Viewed by 4305
Abstract
Pressure injuries represent a major concern in many nations. These wounds, which result from prolonged pressure on the skin, mainly occur among elderly and disabled patients. Although retrieving quantitative information using invasive methods is the most common approach, it causes significant pain and discomfort to the patients and may also increase the risk of infection. Hence, developing non-intrusive methods for the assessment of pressure injuries would represent a highly useful tool for caregivers and a relief for patients. Traditional methods rely on findings retrieved solely from 2D images; bypassing the 3D information deriving from the deep and irregular shape of this type of wound thus leads to biased measurements. In this paper, we propose an end-to-end system which uses a single 2D image and a 3D mesh of the pressure injury, acquired using the Structure Sensor, and outputs all the necessary findings, such as the external segmentation of the wound and its real-world measurements (depth, area, volume, major axis, and minor axis). More specifically, a first block composed of a Mask R-CNN model uses the 2D image to output the segmentation of the external boundaries of the wound. Then, a second block matches the 2D and 3D views to segment the wound in the 3D mesh using the segmentation output and generates the aforementioned real-world measurements. Experimental results showed that the proposed framework not only outputs refined segmentations with 87% precision, but also retrieves reliable measurements, which can be used for the medical assessment and healing evaluation of pressure injuries.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

22 pages, 4541 KiB  
Article
A Strictly Unsupervised Deep Learning Method for HEp-2 Cell Image Classification
by Caleb Vununu, Suk-Hwan Lee and Ki-Ryong Kwon
Sensors 2020, 20(9), 2717; https://0-doi-org.brum.beds.ac.uk/10.3390/s20092717 - 09 May 2020
Cited by 12 | Viewed by 4035
Abstract
Classifying the images that portray the Human Epithelial cells of type 2 (HEp-2) represents one of the most important steps in the diagnosis procedure of autoimmune diseases. Performing this classification manually represents an extremely complicated task due to the heterogeneity of these cellular images. Hence, an automated classification scheme appears to be necessary. However, the majority of the available methods utilize the supervised learning approach for this problem, and the need for thousands of manually labelled images can represent a difficulty with this approach. The first contribution of this work is to demonstrate that classifying HEp-2 cell images can also be done using the unsupervised learning paradigm. Unlike the majority of the existing methods, we propose a deep learning scheme that performs both the feature extraction and the discrimination of the cells through an end-to-end unsupervised paradigm. We propose the use of a deep convolutional autoencoder (DCAE) that performs feature extraction via an encoding–decoding scheme. At the same time, we embed in the network a clustering layer whose purpose is to automatically discriminate, during the feature learning process, the latent representations produced by the DCAE. Furthermore, we investigate how the quality of the network's reconstruction can affect the quality of the produced representations. We investigated the effectiveness of our method on several benchmark datasets, and we demonstrate that unsupervised learning, when done properly, performs at the same level as the supervised learning-based state-of-the-art methods in terms of accuracy.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
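A minimal convolutional autoencoder in the spirit of the DCAE described above is sketched below; the paper additionally embeds a clustering layer on the latent code, which is omitted here, and the image size and filter counts are placeholder choices.

```python
# Hedged sketch: small convolutional autoencoder; latent codes can later be clustered
# (e.g. with k-means) to discriminate the HEp-2 staining patterns without labels.
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
latent = layers.Conv2D(16, 3, padding="same", activation="relu", name="latent")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(latent)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)

dcae = tf.keras.Model(inp, out)
dcae.compile(optimizer="adam", loss="mse")   # reconstruction objective
dcae.summary()
```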

25 pages, 52876 KiB  
Article
Deep Learning–Based Methods for Automatic Diagnosis of Skin Lesions
by Hassan El-Khatib, Dan Popescu and Loretta Ichim
Sensors 2020, 20(6), 1753; https://0-doi-org.brum.beds.ac.uk/10.3390/s20061753 - 21 Mar 2020
Cited by 75 | Viewed by 10029
Abstract
The main purpose of the study was to develop a high-accuracy system able to diagnose skin lesions using deep learning-based methods. We propose a new decision system based on multiple classifiers, such as neural networks and feature-based methods. Each classifier (method) contributes to the final decision system with a certain weight, depending on its calculated accuracy, helping the system make a better decision. First, we created a neural network (NN) that can differentiate melanoma from benign nevus. The NN architecture is analyzed by evaluating it during the training process. Some biostatistical parameters, such as accuracy, specificity, sensitivity, and the Dice coefficient, are calculated. Then, we developed three other methods based on convolutional neural networks (CNNs). The CNNs were pre-trained using the large ImageNet and Places365 databases; GoogleNet, ResNet-101, and NasNet-Large were used, in that order. The CNN architectures were fine-tuned using transfer learning in order to distinguish the different types of skin lesions, and the accuracies of the classifications were determined. The last proposed method uses the classical approach to image object detection, in which features are extracted from the images and then classified, in this case by a support vector machine. Just as in the first method, the sensitivity, specificity, Dice similarity coefficient, and accuracy are determined. A comparison of the results obtained from all the methods is then carried out. As mentioned above, the novelty of this paper is the integration of these methods into a global fusion-based decision system that uses the results obtained by each individual method to establish the fusion weights. The results obtained by carrying out experiments on two different free databases show that the proposed system offers higher accuracy.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
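A hedged sketch of an accuracy-weighted fusion rule in the spirit of the decision system described above is given below; the classifier outputs and accuracies are synthetic placeholders.

```python
# Hedged sketch: fuse class probabilities from several classifiers, weighting each by
# its (validation) accuracy. All numbers below are synthetic.
import numpy as np

def weighted_fusion(class_probs, accuracies):
    """class_probs: list of (n_samples, n_classes) arrays, one per classifier."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                               # normalise the weights
    fused = sum(wi * p for wi, p in zip(w, class_probs))
    return fused.argmax(axis=1)                   # fused class decision

rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(2), size=4) for _ in range(3)]   # three toy classifiers
print(weighted_fusion(probs, accuracies=[0.91, 0.88, 0.84]))
```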

15 pages, 1180 KiB  
Article
Stochastic Selection of Activation Layers for Convolutional Neural Networks
by Loris Nanni, Alessandra Lumini, Stefano Ghidoni and Gianluca Maguolo
Sensors 2020, 20(6), 1626; https://0-doi-org.brum.beds.ac.uk/10.3390/s20061626 - 14 Mar 2020
Cited by 26 | Viewed by 3822
Abstract
In recent years, the field of deep learning has achieved considerable success in pattern recognition, image segmentation, and many other classification fields. There are many studies and practical applications of deep learning to image, video, or text classification. Activation functions play a crucial role in the discriminative capabilities of deep neural networks, and the design of new “static” or “dynamic” activation functions is an active area of research. The main difference between “static” and “dynamic” functions is that the first class of activations considers all the neurons and layers as identical, while the second class learns the parameters of the activation function independently for each layer or even each neuron. Although “dynamic” activation functions perform better in some applications, the increased number of trainable parameters requires more computational time and can lead to overfitting. In this work, we propose a mixture of “static” and “dynamic” activation functions, which are stochastically selected at each layer. Our idea for model design is based on changing some layers along the lines of the different functional blocks of the best-performing CNN models, with the aim of designing new models to be used as stand-alone networks or as components of an ensemble. We propose to replace each activation layer of a CNN (usually a ReLU layer) with a different activation function stochastically drawn from a set of activation functions; in this way, the resulting CNN has a different set of activation function layers.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
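The core idea, drawing each activation layer at random from a pool when the network is built, can be illustrated with a hedged Keras sketch; the activation pool and the toy CNN below are illustrative choices, not the authors' models.

```python
# Hedged sketch: each activation layer is stochastically drawn from a pool of "static"
# activations; building several such networks yields an ensemble.
import random
import tensorflow as tf
from tensorflow.keras import layers

ACTIVATION_POOL = ["relu", "elu", "selu", "swish"]   # illustrative pool

def random_activation():
    return layers.Activation(random.choice(ACTIVATION_POOL))

def build_cnn(num_classes=2):
    m = tf.keras.Sequential()
    m.add(layers.Conv2D(32, 3, padding="same", input_shape=(64, 64, 3)))
    m.add(random_activation())
    m.add(layers.MaxPooling2D())
    m.add(layers.Conv2D(64, 3, padding="same"))
    m.add(random_activation())
    m.add(layers.GlobalAveragePooling2D())
    m.add(layers.Dense(num_classes, activation="softmax"))
    return m

ensemble = [build_cnn() for _ in range(3)]   # members differ only in their drawn activations
```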

15 pages, 2939 KiB  
Article
A Camera Sensors-Based System to Study Drug Effects on In Vitro Motility: The Case of PC-3 Prostate Cancer Cells
by Maria Colomba Comes, Arianna Mencattini, Davide Di Giuseppe, Joanna Filippi, Michele D’Orazio, Paola Casti, Francesca Corsi, Lina Ghibelli, Corrado Di Natale and Eugenio Martinelli
Sensors 2020, 20(5), 1531; https://0-doi-org.brum.beds.ac.uk/10.3390/s20051531 - 10 Mar 2020
Cited by 4 | Viewed by 2720
Abstract
Cell motility is the result of cell status and its interaction with the close environment. Its detection is now possible thanks to the synergy of high-resolution camera sensors, time-lapse microscopy devices, and dedicated software tools for video and data analysis. In this scenario, we formulated a novel paradigm in which we consider the individual cell as a sort of sensitive element of a sensor, which exploits the camera as a transducer returning the movement of the cell as an output signal. In this way, cell movement allows us to retrieve information about the chemical composition of the close environment. To optimally exploit this information, in this work we introduce a new setting in which a cell trajectory is divided into sub-tracks, each one characterized by a specific kind of motion. Hence, we consider all the sub-tracks of a single-cell trajectory as the signals of a virtual array of cell motility-based sensors. The kinematics of each sub-track is quantified and used for a classification task. To investigate the potential of the proposed approach, we compared the achieved performance with that obtained using a single-trajectory paradigm, with the aim of evaluating the effects of chemotherapy treatment on prostate cancer cells. Novel pattern recognition algorithms were applied to the descriptors extracted at the sub-track level, implementing feature and sample selection (a good-teacher learning approach) for model construction. The experimental results show that performance is higher when a further cluster-majority rule is applied, emulating a sort of sensor fusion procedure. All of these results highlight the strength of the proposed approach and prefigure its use in lab-on-chip or organ-on-chip applications, where cell motility analysis can be applied massively using time-lapse microscopy images.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

30 pages, 3158 KiB  
Article
Detecting Pneumonia Using Convolutions and Dynamic Capsule Routing for Chest X-ray Images
by Ansh Mittal, Deepika Kumar, Mamta Mittal, Tanzila Saba, Ibrahim Abunadi, Amjad Rehman and Sudipta Roy
Sensors 2020, 20(4), 1068; https://0-doi-org.brum.beds.ac.uk/10.3390/s20041068 - 15 Feb 2020
Cited by 101 | Viewed by 8107
Abstract
An entity's existence in an image can be depicted by the activity instantiation vector from a group of neurons (called a capsule). Recently, multi-layered capsules, called CapsNet, have proven to be state-of-the-art for image classification tasks. This research utilizes the prowess of this algorithm to detect pneumonia from chest X-ray (CXR) images. Here, an entity in the CXR image can help determine whether the patient (whose CXR is used) is suffering from pneumonia or not. A simple model of capsules (also known as Simple CapsNet) provided results comparable to the best deep learning models used earlier. Subsequently, a combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models. These models, the Integration of convolutions with capsules (ICC) and the Ensemble of convolutions with capsules (ECC), detect pneumonia with test accuracies of 95.33% and 95.90%, respectively. The latter model is studied in detail to obtain a variant called EnCC, where n = 3, 4, 8, 16. Here, the E4CC model works optimally and gives a test accuracy of 96.36%. All these models were trained, validated, and tested on 5857 images from Mendeley.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

21 pages, 4224 KiB  
Article
Aging with Autism Departs Greatly from Typical Aging
by Elizabeth B. Torres, Carla Caballero and Sejal Mistry
Sensors 2020, 20(2), 572; https://0-doi-org.brum.beds.ac.uk/10.3390/s20020572 - 20 Jan 2020
Cited by 13 | Viewed by 8278
Abstract
Autism has largely been portrayed as a psychiatric and childhood disorder. However, autism is a lifelong neurological condition that evolves over time through highly heterogeneous trajectories. These trends have not been studied in relation to normative aging trajectories, so we know very little about aging with autism. One aspect that seems to develop differently is the sense of movement, inclusive of the sensory kinesthetic reafference emerging from continuously sensed, self-generated motions. These include involuntary micro-motions that elude observation yet are routinely obtained in fMRI studies to rid images of motor artifacts. Open-access repositories offer thousands of imaging records, covering 5–65 years of age for both neurotypical and autistic individuals, from which the trajectories of involuntary motions can be ascertained. Here we introduce new computational techniques that automatically stratify different age groups in autism according to probability distance in different representational spaces. Further, we show that autistic cross-sectional population trajectories in probability space fundamentally differ from those of neurotypical controls and that, after 40 years of age, there is an inflection point in autism, signaling a monotonically increasing difference away from age-matched normative involuntary motion signatures. Our work offers new age-appropriate stochastic analyses amenable to redefining basic research and providing dynamic diagnoses as the person's nervous system ages.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

21 pages, 1212 KiB  
Article
Hybrid Eye-Tracking on a Smartphone with CNN Feature Extraction and an Infrared 3D Model
by Braiden Brousseau, Jonathan Rose and Moshe Eizenman
Sensors 2020, 20(2), 543; https://0-doi-org.brum.beds.ac.uk/10.3390/s20020543 - 19 Jan 2020
Cited by 37 | Viewed by 6883
Abstract
This paper describes a low-cost, robust, and accurate remote eye-tracking system that uses an industrial prototype smartphone with integrated infrared illumination and camera. Numerous studies have demonstrated the beneficial use of eye-tracking in domains such as neurological and neuropsychiatric testing, advertising evaluation, pilot training, and automotive safety. Remote eye-tracking on a smartphone could enable significant growth in the deployment of applications in these domains. Our system uses a 3D gaze-estimation model that enables accurate point-of-gaze (PoG) estimation with free head and device motion. To accurately determine the input eye features (pupil center and corneal reflections), the system uses Convolutional Neural Networks (CNNs) together with a novel center-of-mass output layer. The use of CNNs improves the system's robustness to the significant variability in the appearance of eye images found in handheld eye trackers. The system was tested with eight subjects, with the device free to move in their hands, and produced a gaze bias of 0.72°. Our hybrid approach, which uses artificial illumination, a 3D gaze-estimation model, and a CNN feature extractor, achieved an accuracy that is significantly (400%) better than that of current eye-tracking systems on smartphones that use natural illumination and machine-learning techniques to estimate the PoG.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
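The "center-of-mass" readout mentioned above can be approximated by a soft-argmax over a predicted heatmap; the sketch below is an assumption-laden illustration of that idea, not the authors' layer.

```python
# Hedged sketch: probability-weighted average of pixel coordinates over a heatmap,
# giving a differentiable estimate of a feature location such as the pupil centre.
import tensorflow as tf

def center_of_mass(heatmap):
    """heatmap: (batch, H, W) scores; returns (batch, 2) = (x, y) coordinates."""
    b = tf.shape(heatmap)[0]
    h = tf.cast(tf.shape(heatmap)[1], tf.float32)
    w = tf.cast(tf.shape(heatmap)[2], tf.float32)
    p = tf.nn.softmax(tf.reshape(heatmap, (b, -1)), axis=-1)    # normalise to a distribution
    p = tf.reshape(p, tf.shape(heatmap))
    ys = tf.range(h, dtype=tf.float32)
    xs = tf.range(w, dtype=tf.float32)
    y = tf.reduce_sum(p * ys[None, :, None], axis=[1, 2])
    x = tf.reduce_sum(p * xs[None, None, :], axis=[1, 2])
    return tf.stack([x, y], axis=-1)

print(center_of_mass(tf.random.uniform((2, 32, 32))).shape)     # (2, 2)
```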

13 pages, 2078 KiB  
Article
A Pilot Study on Falling-Risk Detection Method Based on Postural Perturbation Evoked Potential Features
by Shenglong Jiang, Hongzhi Qi, Jie Zhang, Shufeng Zhang, Rui Xu, Yuan Liu, Lin Meng and Dong Ming
Sensors 2019, 19(24), 5554; https://0-doi-org.brum.beds.ac.uk/10.3390/s19245554 - 16 Dec 2019
Cited by 1 | Viewed by 2734
Abstract
In a human-robot hybrid system, erroneous recognition by the pattern recognition system may cause the robot to perform erroneous motor execution, which can lead to a risk of falling. The human, in contrast, can clearly detect the existence of such errors, and this is manifested in the characteristics of central nervous activity. To date, the majority of studies on falling-risk detection have focused primarily on computer vision and physical signals; there are no reports of falling-risk detection methods based on neural activity. In this study, we propose a novel method to monitor multiple erroneous motion events using electroencephalogram (EEG) features. Fifteen subjects participated in this study; they kept standing with an upper-limb-supported posture and received an unpredictable postural perturbation. EEG signal analysis revealed a high negative peak with a maximum averaged amplitude of −14.75 ± 5.99 μV, occurring 62 ms after the postural perturbation. The xDAWN algorithm was used to reduce the high dimensionality of the EEG signal features, and Bayesian linear discriminant analysis (BLDA) was used to train a classifier. The detection rate of falling-risk onset is 98.67%, and the detection latency is 334 ms when a detection rate above 90% is set as the criterion for dangerous-event onset. Further analysis showed that the falling-risk detection method based on postural perturbation evoked potential features has good generalization ability: the model based on typical event data achieved a 94.2% detection rate for unlearned, atypical perturbation events. This study demonstrates the feasibility of using the neural response to detect dangerous fall events.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

13 pages, 4081 KiB  
Article
A Shallow Convolutional Learning Network for Classification of Cancers Based on Copy Number Variations
by Ahmad AlShibli and Hassan Mathkour
Sensors 2019, 19(19), 4207; https://0-doi-org.brum.beds.ac.uk/10.3390/s19194207 - 27 Sep 2019
Cited by 8 | Viewed by 2926
Abstract
Genomic copy number variations (CNVs) are among the most important structural variations. They are linked to several diseases and cancer types. Cancer is a leading cause of death worldwide, and several studies have been conducted to investigate its causes and its association with genomic changes in order to enhance its management and improve treatment opportunities. The classification of cancer types based on CNVs falls into this category of research. We reviewed the most successful recent methods that used machine learning algorithms to solve this problem and obtained a dataset that had been tested by some of these methods for evaluation and comparison purposes. We propose three deep learning techniques to classify cancer types based on CNVs: a six-layer convolutional net (CNN6), a residual six-layer convolutional net (ResCNN6), and transfer learning of a pretrained VGG16 net. The results of the experiments performed on data from six cancer types demonstrated a high accuracy of 86% for ResCNN6, followed by 85% for CNN6 and 77% for VGG16. The results also revealed a lower prediction accuracy for one of the classes (uterine corpus endometrial carcinoma, UCEC). Repeating the experiments after excluding this class reveals improvements in the accuracies: 91% for CNN6 and 92% for ResCNN6. We observed that UCEC and ovarian serous carcinoma (OV) share a considerable subset of their features, which makes them difficult for the classifiers to separate. We repeated the experiment after balancing the six classes through oversampling of the training dataset, and the result was an enhancement in both the overall and the UCEC classification accuracies.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

23 pages, 10107 KiB  
Article
Super-Resolution Reconstruction of Cell Pseudo-Color Image Based on Raman Technology
by Yifan Yang, Ming Zhu, Yuqing Wang, Hang Yang, Yanfeng Wu and Bei Li
Sensors 2019, 19(19), 4076; https://0-doi-org.brum.beds.ac.uk/10.3390/s19194076 - 20 Sep 2019
Cited by 5 | Viewed by 3072
Abstract
Raman spectroscopy visualization is a challenging task due to the interference of complex background noise and the number of selected measurement points. In this paper, a super-resolution image reconstruction algorithm for Raman spectroscopy is studied to convert raw Raman data into pseudo-color super-resolution images. First, the Raman spectrum of a single measurement point is measured multiple times and averaged to remove random background noise, and the Retinex algorithm and median filtering are introduced to improve the signal-to-noise ratio. A deep neural network then performs a super-resolution reconstruction operation on the gray-scale image. An adaptive guided filter that automatically adjusts the filter radius and penalty factor is proposed to highlight the contour of the cell, and the super-resolution reconstruction of the pseudo-color Raman image is realized. The average signal-to-noise ratio of the reconstructed pseudo-color image sub-bands reaches 14.29 dB, and the average information entropy reaches 4.30 dB. The results show that the Raman-based cell pseudo-color image super-resolution reconstruction algorithm is an effective tool for noise removal and high-resolution visualization. The comparison experiments show that the pseudo-color images obtained by the method have small Kullback–Leibler (KL) entropy, obvious boundaries, and little noise, which provides technical support for the development of sophisticated single-cell Raman spectroscopy imaging instruments.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

21 pages, 8710 KiB  
Article
Promising Generative Adversarial Network Based Sinogram Inpainting Method for Ultra-Limited-Angle Computed Tomography Imaging
by Ziheng Li, Ailong Cai, Linyuan Wang, Wenkun Zhang, Chao Tang, Lei Li, Ningning Liang and Bin Yan
Sensors 2019, 19(18), 3941; https://0-doi-org.brum.beds.ac.uk/10.3390/s19183941 - 12 Sep 2019
Cited by 40 | Viewed by 5699
Abstract
Limited-angle computed tomography (CT) image reconstruction is a challenging problem in the field of CT imaging. In some special applications, limited by the geometric space and mechanical structure of the imaging system, projections can only be collected over a scanning range of less than 90°. We call this kind of severe limited-angle problem the ultra-limited-angle problem, which is difficult to alleviate effectively with traditional iterative reconstruction algorithms. With the development of deep learning, the generative adversarial network (GAN) performs well in image inpainting tasks and can add effective image information to restore missing parts of an image. In this study, given the ability of GANs to generate missing information, the sinogram-inpainting GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction. We propose a U-Net generator and a patch-design discriminator in SI-GAN to make the network suitable for standard medical CT images. Furthermore, we propose a joint projection-domain and image-domain loss function, in which the weighted image-domain loss is added via the back-projection operation. Then, by feeding paired limited-angle/180° sinograms into the network for training, we obtain a trained model that has extracted the continuity features of sinogram data. Finally, a classic CT reconstruction method is used to reconstruct the images after obtaining the estimated sinograms. The simulation studies and real-data experiments indicate that the proposed method performs well in reducing the severe artifacts caused by ultra-limited-angle scanning.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

Review


24 pages, 4042 KiB  
Review
3D Deep Learning on Medical Images: A Review
by Satya P. Singh, Lipo Wang, Sukrit Gupta, Haveesh Goli, Parasuraman Padmanabhan and Balázs Gulyás
Sensors 2020, 20(18), 5097; https://0-doi-org.brum.beds.ac.uk/10.3390/s20185097 - 07 Sep 2020
Cited by 260 | Viewed by 19685
Abstract
The rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This has been accelerated by rapid advancements in convolutional neural network (CNN)-based architectures, which have been adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different tasks such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)
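For readers who want a concrete starting point, a minimal 3D CNN of the kind reviewed above can be sketched in a few lines of Keras; the volume size, filter counts, and number of classes below are placeholders, not a recommendation from the review.

```python
# Hedged sketch: tiny 3D CNN classifier for a volumetric input (e.g. an MR/CT volume).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv3D(16, 3, activation="relu", padding="same", input_shape=(64, 64, 64, 1)),
    layers.MaxPooling3D(2),
    layers.Conv3D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(2),
    layers.GlobalAveragePooling3D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),     # e.g. disease vs. control (placeholder)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```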
