Machine Learning Techniques for Medical Imaging, Sensing, and Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 August 2022) | Viewed by 27,625

Special Issue Editors


Prof. Dr. Sang Hyun Park
Guest Editor
Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 333, Korea
Interests: medical image analysis; computer vision; machine learning

Prof. Dr. Manhua Liu
Guest Editor
Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: multimodal brain image computing and analysis; machine learning; image segmentation

Dr. Dong Hye Ye
Guest Editor
Department of Electrical and Computer Engineering, Marquette University, Milwaukee, WI 53233, USA
Interests: medical image analysis; machine learning; computational imaging

Special Issue Information

Dear Colleagues,

With the recent advancements in artificial intelligence, technologies for monitoring and diagnosing patients are also making great progress. However, many challenges remain unsolved, stemming from the lack of sufficient data, the difficulty of labeling, and the heterogeneity of medical data. The goal of this Special Issue is to publish original manuscripts and the latest research on machine learning techniques for sensing or imaging bio-signals and for predicting diagnosis or prognosis from medical images or signals. Towards this goal, we welcome submissions describing cutting-edge machine learning and deep learning methods that address existing problems in the biomedical and health informatics fields.

Topics of interest include but are not limited to:

- Machine learning algorithms for medical image analysis;

- Disease classification and prognosis prediction;

- Segmentation and anomaly detection;

- Medical image generation and quality improvement;

- Medical image registration;

- Signal processing for predicting patient intentions or monitoring;

- Big or multi-modality data processing for predicting clinical outcomes.

Prof. Dr. Sang Hyun Park
Prof. Dr. Manhua Liu
Dr. Dong Hye Ye
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research

14 pages, 1390 KiB  
Article
FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN)
by Farideh Bazangani, Frédéric J. P. Richard, Badih Ghattas and Eric Guedj
Sensors 2022, 22(12), 4640; https://doi.org/10.3390/s22124640 - 20 Jun 2022
Cited by 7 | Viewed by 2457
Abstract
Objective: With the strengths of deep learning, computer-aided diagnosis (CAD) is a hot topic for researchers in medical image analysis. One of the main requirements for training a deep learning model is providing enough data for the network. However, for medical images, the difficulties of data collection and data privacy make finding an appropriate dataset (balanced, with enough samples, etc.) quite a challenge. Although image synthesis could help overcome this issue, synthesizing 3D images is a hard task. The main objective of this paper is to generate 3D T1-weighted MRI corresponding to FDG-PET. In this study, we propose a separable convolution-based Elicit generative adversarial network (E-GAN). The proposed architecture can reconstruct 3D T1-weighted MRI from 2D high-level features and geometrical information retrieved from a Sobel filter. Experimental results on the ADNI datasets for healthy subjects show that the proposed model improves image quality compared with the state of the art. In addition, compared with state-of-the-art methods, E-GAN better preserves structural information (13.73% improvement in PSNR and 22.95% in SSIM over Pix2Pix GAN) and textural information (6.9% improvement in homogeneity error among the Haralick features over Pix2Pix GAN).
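
The abstract mentions conditioning the generator on geometric information from a Sobel filter. As a rough illustration of that idea only (not the authors' E-GAN code), the sketch below computes a Sobel edge map from a 2D slice with NumPy/SciPy; the array names and slice size are hypothetical.

```python
# Minimal sketch: Sobel-based geometric features from a 2D image slice,
# one plausible reading of the "geometrical information retrieved from
# a Sobel filter" mentioned in the abstract. Not the authors' code.
import numpy as np
from scipy import ndimage

def sobel_edge_map(slice_2d: np.ndarray) -> np.ndarray:
    """Return the gradient-magnitude edge map of a 2D slice."""
    gx = ndimage.sobel(slice_2d, axis=0, mode="reflect")
    gy = ndimage.sobel(slice_2d, axis=1, mode="reflect")
    return np.hypot(gx, gy)

# Hypothetical usage on a random stand-in for a PET slice.
pet_slice = np.random.rand(128, 128).astype(np.float32)
edges = sobel_edge_map(pet_slice)
print(edges.shape)  # (128, 128)
```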

15 pages, 960 KiB  
Article
Cost-Sensitive Learning for Anomaly Detection in Imbalanced ECG Data Using Convolutional Neural Networks
by Muhammad Zubair and Changwoo Yoon
Sensors 2022, 22(11), 4075; https://doi.org/10.3390/s22114075 - 27 May 2022
Cited by 7 | Viewed by 1920
Abstract
Arrhythmia detection algorithms based on deep learning are attracting considerable interest due to their vital role in the diagnosis of cardiac abnormalities. Despite this interest, deep feature representation for ECG is still challenging and intriguing due to the inter-patient variability of the ECG's morphological characteristics. The aim of this study was to learn a balanced deep feature representation that incorporates both the short-term and long-term morphological characteristics of ECG beats. For efficient feature extraction, we designed a temporal transition module that uses convolutional layers with different kernel sizes to capture a wide range of morphological patterns. Imbalanced data are a key issue in developing an efficient and generalized model for arrhythmia detection, as they cause over-fitting to minority class samples (abnormal beats) of primary interest. To mitigate the imbalanced data issue, we proposed a novel cost-sensitive loss function that ensures a balanced deep representation of class samples by assigning effective weights to each class. The cost-sensitive loss function dynamically alters the class weights for every batch based on the class distribution and model performance. The proposed method achieved an overall accuracy of 99.81% for intra-patient classification and 96.36% for inter-patient classification of heartbeats. The experimental results reveal that the proposed approach learned a balanced representation of ECG beats by mitigating the imbalanced data issue and achieved improved classification performance compared with other studies.
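
The exact weighting rule is not spelled out in the abstract, so the following sketch only illustrates the general idea of a cost-sensitive loss whose class weights are recomputed from each batch's class distribution; the inverse-frequency weighting, function name, and toy batch are assumptions, not the paper's implementation.

```python
# Sketch of a cost-sensitive cross-entropy whose class weights are derived
# from the class distribution of the current batch (inverse frequency).
# An illustration of the general idea only, not the paper's exact loss.
import torch
import torch.nn.functional as F

def batch_weighted_ce(logits: torch.Tensor, labels: torch.Tensor,
                      num_classes: int) -> torch.Tensor:
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = counts.sum() / (counts + 1.0)   # rarer classes get larger weights
    weights = weights / weights.sum()         # normalize for stability
    return F.cross_entropy(logits, labels, weight=weights)

# Hypothetical usage: 8 ECG beats, 5 classes, heavily imbalanced batch.
logits = torch.randn(8, 5)
labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 2])
loss = batch_weighted_ce(logits, labels, num_classes=5)
print(loss.item())
```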

15 pages, 3959 KiB  
Article
Multi-Conditional Constraint Generative Adversarial Network-Based MR Imaging from CT Scan Data
by Mingjie Liu, Wei Zou, Wentao Wang, Cheng-Bin Jin, Junsheng Chen and Changhao Piao
Sensors 2022, 22(11), 4043; https://doi.org/10.3390/s22114043 - 26 May 2022
Cited by 2 | Viewed by 1745
Abstract
Magnetic resonance (MR) imaging is an important computer-aided diagnosis technique with rich pathological information. However, physical and physiological constraints seriously limit its applicability, so computed tomography (CT)-based radiotherapy is more popular on account of its imaging rapidity and environmental simplicity. It is therefore of great theoretical and practical significance to design a method that can construct an MR image from the corresponding CT image. In this paper, we treat MR imaging as a machine vision problem and propose a multi-conditional constraint generative adversarial network (GAN) for MR imaging from CT scan data. Considering the reversibility of GANs, both a generator and a reverse generator are designed for MR and CT imaging, respectively, which constrain each other and improve the consistency between features of CT and MR images. In addition, we treat the discrimination of real and generated MR images as an object re-identification problem; a cosine error fused with the original GAN loss is designed to enhance the verisimilitude and textural features of the MR images. Experimental results on a challenging public CT-MR image dataset show a distinct performance improvement over other GANs utilized in medical imaging and demonstrate the effectiveness of our method for medical image modality transformation.
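
As one plausible reading of fusing a cosine error with the original GAN loss, the sketch below adds a cosine-distance term between embeddings of real and generated MR images to a standard adversarial generator loss; the networks, the embedding source, and the weighting factor are hypothetical.

```python
# Sketch: fusing a cosine-distance term with an adversarial generator loss,
# one way to read the "cosine error fusing with original GAN loss" in the
# abstract. Network definitions and the weighting are illustrative only.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, feat_real, feat_fake, lam=1.0):
    # Non-saturating adversarial loss: push D(fake) toward the "real" label.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Cosine error between embeddings of real and generated MR images.
    cos_err = 1.0 - F.cosine_similarity(feat_real, feat_fake, dim=1).mean()
    return adv + lam * cos_err

# Hypothetical usage with random tensors standing in for network outputs.
loss = generator_loss(torch.randn(4, 1), torch.randn(4, 256), torch.randn(4, 256))
print(loss.item())
```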

14 pages, 16940 KiB  
Article
Multi-Domain Neumann Network with Sensitivity Maps for Parallel MRI Reconstruction
by Jun-Hyeok Lee, Junghwa Kang, Se-Hong Oh and Dong Hye Ye
Sensors 2022, 22(10), 3943; https://doi.org/10.3390/s22103943 - 23 May 2022
Cited by 4 | Viewed by 2131
Abstract
MRI is an imaging technology that non-invasively obtains high-quality medical images for diagnosis. However, MRI has the major disadvantage of long scan times, which cause patient discomfort and image artifacts. As one way of reducing the long scan time of MRI, parallel MRI, which reconstructs a high-fidelity MR image from under-sampled multi-coil k-space data, is widely used. In this study, we propose a method to reconstruct a high-fidelity MR image from under-sampled multi-coil k-space data using deep learning. The proposed multi-domain Neumann network with sensitivity maps (MDNNSM) is based on the Neumann network and uses a forward model including coil sensitivity maps for parallel MRI reconstruction. The MDNNSM consists of three main structures: the CNN-based sensitivity reconstruction block estimates coil sensitivity maps from multi-coil under-sampled k-space data; the recursive MR image reconstruction block reconstructs the MR image; and the skip connection accumulates each output and produces the final result. Experiments using the fastMRI T1-weighted brain image dataset were conducted at acceleration factors of 2, 4, and 8. Qualitative and quantitative results show that the proposed MDNNSM reconstructs MR images more accurately than other methods, including the generalized autocalibrating partially parallel acquisitions (GRAPPA) method and the original Neumann network.
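
The Neumann network that MDNNSM builds on unrolls a truncated Neumann series for a linear inverse problem. The toy sketch below shows only that plain series for a small, well-posed forward model; the learned regularizer, coil-sensitivity estimation, and multi-domain blocks of MDNNSM are not reproduced.

```python
# Sketch of the truncated Neumann-series iteration that a Neumann network
# unrolls: x ~= eta * sum_{k=0..K} (I - eta A^T A)^k A^T y for a linear
# forward model A. Toy example only, not the MDNNSM architecture.
import numpy as np

def neumann_series_recon(A: np.ndarray, y: np.ndarray, eta: float, K: int) -> np.ndarray:
    term = eta * (A.T @ y)          # k = 0 term of the series
    accum = term.copy()             # skip-connection-style accumulation of terms
    for _ in range(K):
        term = term - eta * (A.T @ (A @ term))   # multiply by (I - eta A^T A)
        accum = accum + term
    return accum

# Toy, well-posed forward model (in parallel MRI the problem is under-sampled
# and the learned regularizer handles the ill-posedness).
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 64)) / np.sqrt(128)
x_true = rng.standard_normal(64)
y = A @ x_true
x_hat = neumann_series_recon(A, y, eta=0.5, K=500)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # close to 0 for large K
```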

12 pages, 1841 KiB  
Article
Method to Minimize the Errors of AI: Quantifying and Exploiting Uncertainty of Deep Learning in Brain Tumor Segmentation
by Joohyun Lee, Dongmyung Shin, Se-Hong Oh and Haejin Kim
Sensors 2022, 22(6), 2406; https://doi.org/10.3390/s22062406 - 21 Mar 2022
Cited by 6 | Viewed by 2425
Abstract
Despite the unprecedented success of deep learning in various fields, it has been recognized that clinical diagnosis requires extra caution when applying recent deep learning techniques because false predictions can have severe consequences. In this study, we propose a reliable deep learning framework that minimizes incorrect segmentation by quantifying and exploiting uncertainty measures. The framework demonstrated its effectiveness on a public dataset, the Multimodal Brain Tumor Segmentation Challenge 2018. Using this framework, segmentation performance was improved, particularly for small lesions. Since the segmentation of small lesions is difficult but clinically significant, this framework could be effectively applied in the medical imaging field.
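
The abstract does not state which uncertainty measure is used, so the sketch below shows one common, generic choice, the voxel-wise predictive entropy of the softmax output, purely to illustrate how unreliable voxels might be flagged; it is not the paper's method.

```python
# Sketch: voxel-wise predictive entropy as an uncertainty map for a softmax
# segmentation output. One common choice, shown only to illustrate the idea
# of flagging ambiguous voxels; not the paper's uncertainty measure.
import numpy as np

def predictive_entropy(probs: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """probs: (C, D, H, W) softmax probabilities; returns a (D, H, W) entropy map."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

# Hypothetical 3-class segmentation of a tiny volume.
rng = np.random.default_rng(0)
logits = rng.standard_normal((3, 8, 8, 8))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
uncertainty = predictive_entropy(probs)
review_mask = uncertainty > 0.8 * np.log(3)   # flag the most ambiguous voxels
print(review_mask.mean())
```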

13 pages, 2570 KiB  
Article
Diagnosis of Esophageal Lesions by Multi-Classification and Segmentation Using an Improved Multi-Task Deep Learning Model
by Suigu Tang, Xiaoyuan Yu, Chak-Fong Cheang, Zeming Hu, Tong Fang, I-Cheong Choi and Hon-Ho Yu
Sensors 2022, 22(4), 1492; https://doi.org/10.3390/s22041492 - 15 Feb 2022
Cited by 9 | Viewed by 1923
Abstract
It is challenging for endoscopists to accurately detect esophageal lesions during gastrointestinal endoscopic screening because different lesions look visually similar in shape, size, and texture across patients. In addition, endoscopists are busy fighting esophageal lesions every day, hence the need for a computer-aided diagnostic tool that classifies and segments lesions in endoscopic images to reduce their burden. We therefore propose a multi-task classification and segmentation (MTCS) model comprising an Esophageal Lesions Classification Network (ELCNet) and an Esophageal Lesions Segmentation Network (ELSNet). The ELCNet classifies types of esophageal lesions, and the ELSNet identifies lesion regions. We created a dataset by collecting 805 esophageal images from 255 patients and 198 images from 64 patients to train and evaluate the MTCS model. Compared with other methods, the proposed model not only achieved high accuracy (93.43%) in classification but also a Dice similarity coefficient of 77.84% in segmentation. In conclusion, the MTCS model can boost the performance of endoscopists in the detection of esophageal lesions, as it can accurately classify and segment lesions, and it is a potential assistant for endoscopists to reduce the risk of oversight.
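
For reference, the Dice similarity coefficient quoted above can be computed for binary lesion masks as in the short sketch below; the masks and names are illustrative.

```python
# Sketch: Dice similarity coefficient (DSC) for binary segmentation masks,
# the metric reported in the abstract. Toy masks, not the paper's data.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Hypothetical prediction and ground-truth masks.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 15:45] = True
print(round(dice_coefficient(pred, gt), 3))
```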

18 pages, 563 KiB  
Article
Deep Learning Techniques in the Classification of ECG Signals Using R-Peak Detection Based on the PTB-XL Dataset
by Sandra Śmigiel, Krzysztof Pałczyński and Damian Ledziński
Sensors 2021, 21(24), 8174; https://doi.org/10.3390/s21248174 - 07 Dec 2021
Cited by 23 | Viewed by 5342
Abstract
Deep neural networks (DNNs) are state-of-the-art machine learning algorithms whose application to electrocardiographic signals is gaining importance. So far, only limited studies or optimizations using DNNs on ECG databases can be found. To explore and achieve effective ECG recognition, this paper presents a convolutional neural network that encodes a single QRS complex together with entropy-based features. The study aims to determine which combination of signal information provides the best result for classification purposes. The analyzed information included the raw ECG signal, entropy-based features computed from raw ECG signals, extracted QRS complexes, and entropy-based features computed from extracted QRS complexes. The tests were based on the classification of 2, 5, and 20 classes of heart disease, using the data contained in the PTB-XL database. At the same time, an innovative method of extracting QRS complexes was presented, based on aggregating the results of established algorithms for multi-lead signals using the k-means method. The obtained results show that adding entropy-based features and extracted QRS complexes to the raw signal is beneficial, whereas raw signals with entropy-based features but without extracted QRS complexes performed much worse.
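
The paper's exact entropy-based features are not defined in the abstract; the sketch below computes one plausible variant, the Shannon entropy of an ECG segment's amplitude histogram, purely as a stand-in.

```python
# Sketch: a Shannon-entropy feature computed from an ECG segment's amplitude
# histogram, one plausible form of the "entropy-based features" mentioned in
# the abstract; not the paper's feature definition.
import numpy as np

def shannon_entropy(signal: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical single-lead QRS segment (synthetic).
rng = np.random.default_rng(0)
qrs = np.sin(np.linspace(0, np.pi, 90)) + 0.05 * rng.standard_normal(90)
print(round(shannon_entropy(qrs), 3))
```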

11 pages, 1939 KiB  
Communication
Deep Convolution Neural Network for Laryngeal Cancer Classification on Contact Endoscopy-Narrow Band Imaging
by Nazila Esmaeili, Esam Sharaf, Elmer Jeto Gomes Ataide, Alfredo Illanes, Axel Boese, Nikolaos Davaris, Christoph Arens, Nassir Navab and Michael Friebe
Sensors 2021, 21(23), 8157; https://doi.org/10.3390/s21238157 - 06 Dec 2021
Cited by 11 | Viewed by 2386
Abstract
(1) Background: Contact Endoscopy (CE) and Narrow Band Imaging (NBI) are optical imaging modalities that can provide enhanced and magnified visualization of the superficial vascular networks in the laryngeal mucosa. The similarity of vascular structures between benign and malignant lesions makes the visual assessment of CE-NBI images challenging. The main objective of this study is to use deep convolutional neural networks (DCNNs) for the automatic classification of CE-NBI images into benign and malignant groups with minimal human intervention. (2) Methods: A pretrained ResNet50 model combined with the cut-off-layer technique was selected as the DCNN architecture. A dataset of 8181 CE-NBI images was used during the fine-tuning process in three experiments in which several models were generated and validated. Accuracy, sensitivity, and specificity were calculated as the performance metrics in each validation and testing scenario. (3) Results: Out of 72 trained and tested models across all experiments, Model 5 showed high performance. This model is considerably smaller than the full ResNet50 architecture and achieved a testing accuracy of 0.835 on unseen data in the last experiment. (4) Conclusion: The proposed fine-tuned ResNet50 model classified CE-NBI images into benign and malignant groups with high performance and has the potential to be part of an assisted system for automatic laryngeal cancer detection.
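
As a hedged illustration of a cut-off-layer setup, the sketch below truncates a ResNet50 after an intermediate stage and attaches a small binary head; the actual cut point and head design are not known from the abstract, and the sketch skips loading the pretrained weights the authors start from.

```python
# Sketch: a ResNet50 truncated after an intermediate stage ("cut-off layer")
# with a small benign/malignant head. Illustrative only; not the authors'
# exact cut point, head, or training setup.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50()          # pretrained weights would be loaded in practice
cutoff = nn.Sequential(*list(backbone.children())[:6])   # keep stages up to layer2
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(512, 2))                   # benign vs. malignant
model = nn.Sequential(cutoff, head)

x = torch.randn(1, 3, 224, 224)       # hypothetical CE-NBI image tensor
print(model(x).shape)                 # torch.Size([1, 2])
```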

12 pages, 4390 KiB  
Article
Exponential-Distance Weights for Reducing Grid-like Artifacts in Patch-Based Medical Image Registration
by Liang Wu, Shunbo Hu and Changchun Liu
Sensors 2021, 21(21), 7112; https://doi.org/10.3390/s21217112 - 26 Oct 2021
Cited by 4 | Viewed by 2079
Abstract
Patch-based medical image registration has been well explored in recent decades. However, the patch fusion process can generate grid-like artifacts along the edges of patches for two reasons: firstly, the zero-padding used to keep the input and output the same size causes uncertainty near the edges of the output feature map during feature extraction; secondly, extracting patches with a sliding window at different strides produces different degrees of grid-like artifacts. In this paper, we propose an exponential-distance-weighted (EDW) method to remove grid-like artifacts. To account for the uncertainty of predictions near patch edges, we use an exponential function to convert the distance from a point in the overlapping regions to the patch center into a weighting coefficient. This gives lower weights to areas near patch edges, reducing the influence of uncertain predictions. Finally, the dense displacement field is obtained with this EDW weighting method. We used the OASIS-3 dataset to evaluate the performance of our method. The experimental results show that the proposed EDW patch fusion method removes grid-like artifacts and achieves Dice similarity coefficients superior to those of several state-of-the-art methods. The proposed fusion method can be used together with any patch-based registration model.
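
The core of the EDW idea, weighting each voxel by exp(-d/sigma) where d is its distance to the patch centre, can be sketched as follows; the decay rate, single-channel fusion, and 2D toy example are simplifications of the paper's displacement-field setting.

```python
# Sketch: exponential-distance weighting (EDW) of overlapping patch outputs.
# Each voxel's weight decays with its distance to the patch centre, so
# uncertain patch-edge predictions contribute less. Simplified illustration.
import numpy as np

def edw_weight_map(patch_shape, sigma: float = 8.0) -> np.ndarray:
    """Per-voxel weight exp(-d / sigma), d = distance to the patch centre."""
    grids = np.meshgrid(*[np.arange(s) for s in patch_shape], indexing="ij")
    centre = [(s - 1) / 2.0 for s in patch_shape]
    d = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, centre)))
    return np.exp(-d / sigma)

def fuse_patches(patches, corners, out_shape):
    """Weighted average of overlapping single-channel patch predictions."""
    acc, wsum = np.zeros(out_shape), np.zeros(out_shape)
    for patch, corner in zip(patches, corners):
        sl = tuple(slice(c, c + s) for c, s in zip(corner, patch.shape))
        w = edw_weight_map(patch.shape)
        acc[sl] += w * patch
        wsum[sl] += w
    return acc / np.maximum(wsum, 1e-8)

# Hypothetical 2D example with two overlapping 32x32 patches.
p1, p2 = np.ones((32, 32)), 2 * np.ones((32, 32))
fused = fuse_patches([p1, p2], corners=[(0, 0), (0, 16)], out_shape=(32, 48))
print(fused.shape)
```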

13 pages, 18090 KiB  
Article
Deep Recursive Bayesian Tracking for Fully Automatic Centerline Extraction of Coronary Arteries in CT Images
by Byunghwan Jeon
Sensors 2021, 21(18), 6087; https://doi.org/10.3390/s21186087 - 10 Sep 2021
Cited by 5 | Viewed by 2231
Abstract
Extraction of the coronary arteries in coronary computed tomography (CT) angiography is a prerequisite for quantifying coronary lesions. In this study, we propose a tracking method that combines a deep convolutional neural network (DNN) with particle filtering to identify the trajectories from the coronary ostium to each distal end in 3D CT images. The particle filter, as a non-linear approximator, is an appropriate tracking framework for such thin and elongated structures; however, a robust 'vesselness' measurement is essential for extracting coronary centerlines. Importantly, we employ the DNN to robustly measure vesselness from patch images, and we integrate its softmax values into the likelihood function of our particle filtering framework. Tangent patches represent cross-sections of coronary arteries, which have circular shapes; thus, 2D tangent patches are assumed to include enough features of the coronary arteries, and their use significantly reduces computational complexity. Because the coronary vasculature has multiple bifurcations, we also model a method to detect branching sites by clustering particle locations. The proposed method is compared with three commercial workstations and two conventional methods from the academic literature.
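
A minimal, hedged sketch of the underlying particle-filter loop is shown below, with a stand-in "vesselness" score playing the role of the DNN softmax in the likelihood; the motion model, noise levels, and toy geometry are assumptions, not the paper's tracker.

```python
# Sketch: one propagate/weight/resample step of a particle filter whose
# likelihood comes from a softmax-like "vesselness" score, mirroring the
# abstract's idea at a toy scale. The vesselness function is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def vesselness(points: np.ndarray) -> np.ndarray:
    """Stand-in for the DNN softmax score on tangent patches at `points`:
    high near a toy vessel running along the x-axis, low elsewhere."""
    d = np.linalg.norm(points[:, 1:], axis=1)   # distance from the x-axis
    return np.exp(-d ** 2)

def particle_filter_step(particles: np.ndarray, step: float = 1.0) -> np.ndarray:
    # Propagate particles forward along a fixed direction with Gaussian noise
    # (a real tracker would follow the locally estimated vessel direction).
    particles = (particles + np.array([step, 0.0, 0.0])
                 + 0.2 * rng.standard_normal(particles.shape))
    # Weight each particle by the vesselness likelihood, then resample.
    w = vesselness(particles)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = 0.5 * rng.standard_normal((200, 3))   # initialise near the ostium
for _ in range(5):
    particles = particle_filter_step(particles)
print(particles.mean(axis=0))   # estimated centerline point after 5 steps
```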