Applications of Artificial Intelligence in Medical Imaging

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 24,034

Special Issue Editor


Dr. Nikolaos Dikaios
Guest Editor
Mathematics Research Centre, Academy of Athens, 10679 Athens, Greece
Interests: tomography; inverse problems; mathematical optimisation; cancer informatics; physics

Special Issue Information

Dear Colleagues,

Medical imaging aims to improve the diagnostic performance for various diseases and to help us understand the underlying pathophysiology. Artificial intelligence (AI) is the new frontier in medical imaging, offering innovative solutions to multiple tasks such as reconstruction, segmentation, registration, and computer-aided diagnosis. The field of AI itself is rapidly evolving, with new, exciting advancements such as explainable AI and artificial general intelligence (AGI). The scope of this Special Issue is to translate recent advances in AI into medical imaging, aiming to diagnose diseases earlier and more accurately, facilitate drug discovery, and enable personalized patient therapy.

Dr. Nikolaos Dikaios
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical imaging
  • magnetic resonance imaging
  • computed tomography
  • positron emission tomography
  • optical computed tomography
  • ultrasound
  • computer-aided diagnosis
  • image reconstruction
  • image registration
  • image segmentation

Published Papers (10 papers)


Research

16 pages, 4142 KiB  
Article
Class Imbalanced Medical Image Classification Based on Semi-Supervised Federated Learning
by Wei Liu, Jiaqing Mo and Furu Zhong
Appl. Sci. 2023, 13(4), 2109; https://0-doi-org.brum.beds.ac.uk/10.3390/app13042109 - 06 Feb 2023
Cited by 2 | Viewed by 1875
Abstract
In recent years, the application of federated learning to medical image classification has received much attention and achieved promising results on semi-supervised problems. However, existing approaches make insufficient use of labeled data and suffer serious model degradation with small batches when the data classes are imbalanced. In this paper, we propose a federated learning method that combines regularization constraints with pseudo-label construction, where the federated framework consists of a central server and local clients containing only unlabeled data; labeled data are passed from the central server to each local client to take part in semi-supervised training. We first extract class-imbalance factors from the labeled data to impose label constraints during training, and then fuse the labeled data with the unlabeled data at each local client to construct augmented samples, iterating to generate pseudo-labels. Combining these two methods favours selecting minority classes with higher probability, providing an effective solution to the class-imbalance problem and improving the network's sensitivity to unlabeled data. We experimentally validated our method on a publicly available medical image classification dataset of 10,015 images using small data batches. Our method improved the AUC by 7.35% and the average class sensitivity by 1.34% compared to state-of-the-art methods, indicating that it maintains a strong learning capability even on an imbalanced dataset with fewer training batches.
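A common way to realize the class-imbalance-aware pseudo-labeling described above is to rescale the model's class probabilities by inverse class frequency before thresholding. The sketch below is illustrative only; the paper's exact weighting, threshold, and federated plumbing are not specified in the abstract, so `threshold` and the weighting rule are assumptions.

```python
def pseudo_label(probs, class_counts, threshold=0.7):
    """Assign a pseudo-label from model probabilities, up-weighting
    minority classes by inverse class frequency (a generic remedy for
    class imbalance; the paper's exact rule may differ)."""
    total = sum(class_counts)
    # Inverse-frequency weights favour rarer classes.
    weights = [total / (len(class_counts) * c) for c in class_counts]
    scaled = [p * w for p, w in zip(probs, weights)]
    norm = sum(scaled)
    scaled = [s / norm for s in scaled]
    best = max(range(len(scaled)), key=lambda i: scaled[i])
    # Only confident predictions become pseudo-labels.
    return best if scaled[best] >= threshold else None
```

With counts of 900 vs. 100 samples, a prediction of (0.6, 0.4) flips to the minority class after reweighting, while a majority-class prediction that merely matches the prior is rejected.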
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medical Imaging)

10 pages, 10474 KiB  
Article
Learning Models for Bone Marrow Edema Detection in Magnetic Resonance Imaging
by Gonçalo Ribeiro, Tania Pereira, Francisco Silva, Joana Sousa, Diogo Costa Carvalho, Sílvia Costa Dias and Hélder P. Oliveira
Appl. Sci. 2023, 13(2), 1024; https://0-doi-org.brum.beds.ac.uk/10.3390/app13021024 - 12 Jan 2023
Cited by 1 | Viewed by 2612
Abstract
Bone marrow edema (BME) is the term given to the abnormal fluid signal seen within the bone marrow on magnetic resonance imaging (MRI). It usually indicates the presence of underlying pathology and is associated with a myriad of conditions/causes. However, it can be misleading: in some cases it may reflect normal changes in the bone, especially during the growth period of childhood, and objective methods for assessment are lacking. In this work, learning models for BME detection were developed. Transfer learning was used to overcome the size limitations of the dataset, and two different regions of interest (ROI) were defined and compared to evaluate their impact on model performance: a bone segmentation mask and an intensity mask. The best model was obtained with the high-intensity masking technique, achieving a balanced accuracy of 0.792 ± 0.034. This study compares different models and data regularization techniques for BME detection and showed promising results, even in the most difficult age range: children and adolescents. The application of machine learning methods will help decrease dependence on clinicians, providing an initial stratification of patients based on the probability of edema and supporting diagnostic decisions.
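The high-intensity masking ROI mentioned above can be sketched as keeping only pixels above a percentile cutoff of the image intensities, since edema appears as an abnormally bright fluid signal on MRI. The percentile value and nearest-rank rule here are assumptions, not the paper's published parameters.

```python
def intensity_mask(image, percentile=90):
    """Build a high-intensity ROI mask: keep pixels at or above the
    given percentile of the image's intensity distribution
    (an illustrative stand-in for the paper's intensity-mask ROI)."""
    flat = sorted(v for row in image for v in row)
    # Nearest-rank percentile cutoff.
    idx = max(0, int(len(flat) * percentile / 100) - 1)
    cutoff = flat[idx]
    return [[1 if v >= cutoff else 0 for v in row] for row in image]
```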

22 pages, 3682 KiB  
Article
A Self-Supervised Detail-Sensitive ViT-Based Model for COVID-19 X-ray Image Diagnosis: SDViT
by Kang An and Yanping Zhang
Appl. Sci. 2023, 13(1), 454; https://0-doi-org.brum.beds.ac.uk/10.3390/app13010454 - 29 Dec 2022
Viewed by 1090
Abstract
COVID-19 has had a severe impact on society and healthcare systems, making early diagnosis and effective treatment critical. The chest X-ray (CXR) is the most time-saving and cost-effective tool for diagnosing COVID-19. However, manual detection is time-consuming, depends on individual experience, and easily introduces errors; with large numbers of infections and a shortage of medical resources, a fast and accurate diagnostic technique is required. Deep learning methods can provide automated detection and computer-aided diagnosis, but they typically require a large amount of data, which is impractical given the limited number of annotated CXR images. In this research, SDViT, a transformer-based approach, is proposed for COVID-19 diagnosis through image classification. We propose three innovations, namely self-supervised learning, a detail correction path (DCP), and domain transfer, and add them to the ViT architecture. Experimental results show that our method achieves an accuracy of 95.2381%, outperforming well-established methods on the X-ray Image dataset, along with the highest precision (0.952310), recall (0.963964), and F1-score (0.958102). Extensive experiments show that our model also achieves the best performance on the synthetic-covid-cxr dataset. These results demonstrate the advantages of our design for the classification of COVID-19 X-ray images.
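The accuracy, precision, recall, and F1 figures reported above follow the standard binary-classification definitions, which can be computed as below (1 = COVID-19 positive). This is the textbook metric computation, not code from the paper.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels, matching
    the standard definitions used when reporting classifier results."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```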

14 pages, 1982 KiB  
Article
Weakly Supervised Learning with Positive and Unlabeled Data for Automatic Brain Tumor Segmentation
by Daniel Wolf, Sebastian Regnery, Rafal Tarnawski, Barbara Bobek-Billewicz, Joanna Polańska and Michael Götz
Appl. Sci. 2022, 12(21), 10763; https://0-doi-org.brum.beds.ac.uk/10.3390/app122110763 - 24 Oct 2022
Cited by 4 | Viewed by 1520
Abstract
A major obstacle to the learning-based segmentation of healthy and tumorous brain tissue is the need to create a fully labeled training dataset, which requires tedious and error-prone manual labeling of both tumor and non-tumor areas. To mitigate this problem, we propose a new method to obtain high-quality classifiers from a dataset in which only small parts of the tumor areas are labeled. This is achieved by using positive and unlabeled learning in conjunction with a domain adaptation technique. The proposed approach leverages the tumor volume, which we show can be either derived with simple measures or estimated completely automatically with a proposed method. While learning from sparse samples already reduces the necessary annotation time from 4 h to 5 min, we show that the proposed approach reduces the necessary annotation by a further roughly 50% while maintaining accuracy comparable to traditionally trained classifiers.
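Positive and unlabeled (PU) learning is often implemented with the classic Elkan-Noto calibration: train an ordinary classifier to separate labeled positives from unlabeled data, then divide its scores by the average score on held-out labeled positives. The sketch below shows that generic correction; the paper's own PU scheme and domain adaptation step may differ.

```python
def elkan_noto_correct(scores_unlabeled, scores_labeled_pos):
    """Elkan-Noto calibration for PU learning: a classifier trained on
    labeled-vs-unlabeled data estimates P(labeled | x); dividing by
    c = E[score | labeled positive] recovers an estimate of
    P(positive | x). Clipped to 1.0 to stay a valid probability."""
    c = sum(scores_labeled_pos) / len(scores_labeled_pos)
    return [min(1.0, s / c) for s in scores_unlabeled]
```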

14 pages, 2446 KiB  
Article
A Hybrid Linear Iterative Clustering and Bayes Classification-Based GrabCut Segmentation Scheme for Dynamic Detection of Cervical Cancer
by Anousouya Devi Magaraja, Ezhilarasie Rajapackiyam, Vaitheki Kanagaraj, Suresh Joseph Kanagaraj, Ketan Kotecha, Subramaniyaswamy Vairavasundaram, Mayuri Mehta and Vasile Palade
Appl. Sci. 2022, 12(20), 10522; https://0-doi-org.brum.beds.ac.uk/10.3390/app122010522 - 18 Oct 2022
Cited by 4 | Viewed by 1409
Abstract
Early detection of cervical cancer remains indispensable for improving survival rates among women worldwide and is commonly performed using the Pap smear cell test. Detection is challenged by the degradation that arises in the image segmentation task when the superpixel count is minimized. This paper introduces a Hybrid Linear Iterative Clustering and Bayes classification-based GrabCut Segmentation Technique (HLC-BC-GCST) for the dynamic detection of cervical cancer. In the proposed HLC-BC-GCST approach, linear iterative clustering is employed to cluster the potential features of the preprocessed image and is combined with GrabCut to prevent the issues that arise when the number of superpixels is minimized. In addition, the proposed scheme benefits from the advantages of a Gaussian mixture model (GMM) applied to the features extracted by the iterative clustering method, from which a mapping is performed to describe the energy function. Bayes classification is then used to reconstruct the graph cut model from the energy function derived from the GMM-based linear iterative clustering features, for better computation and implementation. Finally, boundary optimization with the GrabCut algorithm considerably reduces the roughness of the segmented cervical cells, comprising the cytoplasm and nuclei regions, yielding improved segmentation accuracy. The results of the proposed HLC-BC-GCST scheme are 6% better than those obtained by other standard graph-cut-based approaches for cervical cancer detection.
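Linear iterative clustering (as in SLIC superpixels) repeatedly assigns each pixel to the nearest cluster center under a distance that mixes colour similarity with spatial proximity. One assignment step can be sketched as below; the weighting constants `m` and `s` are illustrative, not taken from the paper.

```python
def assign_superpixels(pixels, centers, m=10.0, s=2.0):
    """One assignment step of SLIC-style linear iterative clustering:
    each pixel (x, y, intensity) joins the nearest center under a
    combined intensity + spatial distance, with m/s trading off
    compactness against colour fidelity."""
    labels = []
    for x, y, v in pixels:
        best, best_d = 0, float("inf")
        for k, (cx, cy, cv) in enumerate(centers):
            dc = (v - cv) ** 2                      # intensity distance
            ds = (x - cx) ** 2 + (y - cy) ** 2      # spatial distance
            d = dc + (m / s) ** 2 * ds
            if d < best_d:
                best, best_d = k, d
        labels.append(best)
    return labels
```

In the full algorithm this step alternates with re-estimating the centers, and the resulting superpixel statistics would feed the GMM/Bayes stage described above.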

18 pages, 5654 KiB  
Article
Aquila Optimizer with Bayesian Neural Network for Breast Cancer Detection on Ultrasound Images
by Marwa Obayya, Siwar Ben Haj Hassine, Sana Alazwari, Mohamed K. Nour, Abdullah Mohamed, Abdelwahed Motwakel, Ishfaq Yaseen, Abu Sarwar Zamani, Amgad Atta Abdelmageed and Gouse Pasha Mohammed
Appl. Sci. 2022, 12(17), 8679; https://0-doi-org.brum.beds.ac.uk/10.3390/app12178679 - 30 Aug 2022
Cited by 3 | Viewed by 1514
Abstract
Breast cancer is the second most common kind of cancer among women. Breast ultrasound images (BUI) are commonly employed for detecting and classifying abnormalities in the breast, and such images are necessary for developing artificial intelligence (AI)-enabled diagnostic support technologies. Computer-aided diagnosis (CAD) models are useful for improving breast cancer detection and classification performance, and recent advances in deep learning (DL) enable detection and classification from biomedical images. With this motivation, this article presents an Aquila Optimizer with Bayesian Neural Network for Breast Cancer Detection (AOBNN-BDNN) model for BUI. The presented model follows a series of steps to detect and classify breast cancer on BUI. It initially employs Wiener filtering (WF) for noise removal and U-Net segmentation as a pre-processing step. The SqueezeNet model then derives a collection of feature vectors from the pre-processed image, and a Bayesian neural network (BNN) assigns class labels to the input images. Finally, the Aquila Optimizer (AO) is exploited to fine-tune the BNN parameters to improve classification performance. To validate the enhanced performance of the AOBNN-BDNN method, a wide experimental study was executed on benchmark datasets, and the analysis confirmed its improvements over recent techniques.
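The Wiener filtering (WF) pre-processing step mentioned above shrinks each pixel's deviation from the local mean by `max(0, var - noise) / var`, so smooth regions are flattened while strong structure survives. Below is a textbook 1-D version of that rule; the window size and noise estimate are assumptions, and the paper presumably applies the 2-D image form.

```python
def wiener_filter_1d(signal, window=3, noise_var=None):
    """Local Wiener denoising: within each window, keep the local mean
    plus a shrunken residual, with gain max(0, var - noise) / var."""
    n = len(signal)
    half = window // 2
    means, variances = [], []
    for i in range(n):
        w = signal[max(0, i - half):min(n, i + half + 1)]
        mu = sum(w) / len(w)
        means.append(mu)
        variances.append(sum((v - mu) ** 2 for v in w) / len(w))
    if noise_var is None:
        # Common heuristic: estimate noise as the mean local variance.
        noise_var = sum(variances) / n
    out = []
    for x, mu, var in zip(signal, means, variances):
        gain = max(0.0, var - noise_var) / var if var > 0 else 0.0
        out.append(mu + gain * (x - mu))
    return out
```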

22 pages, 6556 KiB  
Article
A Deep Learning Method for Early Detection of Diabetic Foot Using Decision Fusion and Thermal Images
by Khairul Munadi, Khairun Saddami, Maulisa Oktiana, Roslidar Roslidar, Kahlil Muchtar, Melinda Melinda, Rusdha Muharar, Maimun Syukri, Taufik Fuadi Abidin and Fitri Arnia
Appl. Sci. 2022, 12(15), 7524; https://0-doi-org.brum.beds.ac.uk/10.3390/app12157524 - 26 Jul 2022
Cited by 19 | Viewed by 3251
Abstract
Diabetes mellitus (DM) is one of the major diseases that cause death worldwide and leads to complications such as diabetic foot ulcers (DFU). Improper and late handling of a diabetic foot patient can result in amputation of the patient's foot. Early DFU symptoms can be observed using thermal imaging with a computer-assisted classifier. A previous study of DFU detection using thermal images achieved only 97% accuracy, leaving room for improvement. This article proposes a novel framework for DFU classification based on thermal imaging using deep neural networks and decision fusion, where decision fusion combines the classification results from parallel classifiers. We used the convolutional neural network (CNN) models ShuffleNet and MobileNetV2 as baseline classifiers. Both models were first trained on plantar thermogram datasets; their classification results were then fused using a novel decision fusion method to increase the accuracy rate. The proposed framework achieved 100% accuracy in classifying the DFU thermal images into binary positive and negative cases, an increase of about 3.4% over the baseline ShuffleNet and MobileNetV2. Overall, the proposed framework outperformed state-of-the-art deep learning and traditional machine-learning-based classifiers.
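The simplest form of decision-level fusion averages the two classifiers' class probabilities before taking the argmax. The paper describes its fusion rule as novel, so the weighted average below is only a generic baseline illustrating the idea; the weight `w_a` is an assumption.

```python
def fuse_decisions(probs_a, probs_b, w_a=0.5):
    """Decision fusion of two classifiers: weighted average of their
    class-probability vectors, then argmax for the fused label."""
    fused = [w_a * a + (1 - w_a) * b for a, b in zip(probs_a, probs_b)]
    label = max(range(len(fused)), key=lambda i: fused[i])
    return label, fused
```

When one network is confident and the other mildly disagrees, the fused decision follows the stronger evidence, which is why fusing complementary models (here ShuffleNet and MobileNetV2) can beat either alone.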

25 pages, 26446 KiB  
Article
H&E Multi-Laboratory Staining Variance Exploration with Machine Learning
by Fabi Prezja, Ilkka Pölönen, Sami Äyrämö, Pekka Ruusuvuori and Teijo Kuopio
Appl. Sci. 2022, 12(15), 7511; https://0-doi-org.brum.beds.ac.uk/10.3390/app12157511 - 26 Jul 2022
Cited by 5 | Viewed by 2537
Abstract
In diagnostic histopathology, hematoxylin and eosin (H&E) staining is a critical process that highlights salient histological features. Staining results vary between laboratories regardless of the histopathological task, even though the method does not change. This variance can impair the accuracy of algorithms and histopathologists' time-to-insight, and investigating it can help calibrate stain-normalization tasks to reverse this negative potential. Using machine learning, this study evaluated the staining variance between laboratories on three tissue types. We received H&E-stained slides from 66 different laboratories; each slide contained kidney, skin, and colon tissue samples stained by the method routinely used in that laboratory. The samples were digitized and summarized as red, green, and blue channel histograms, and dimensionality was reduced using principal component analysis. The projected data were fed into the k-means clustering algorithm and the k-nearest-neighbors classifier with the laboratories as the target. The k-means silhouette index indicated that K = 2 clusters had the best separability in all tissue types, and the supervised classification results showed laboratory effects and tissue-type bias. Both the supervised and unsupervised approaches suggested that tissue type also affects inter-laboratory variance. We therefore suggest that tissue type also be considered when choosing the staining and color-normalization approach.
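The silhouette index used above to choose K is defined per point as (b - a) / max(a, b), where a is the mean distance to the point's own cluster and b the lowest mean distance to any other cluster; the study's mean over all points rewards well-separated clusterings. A minimal 1-D sketch of that definition:

```python
def silhouette_index(points, labels):
    """Mean silhouette coefficient for 1-D points: s = (b - a) / max(a, b),
    where a is the mean intra-cluster distance and b the lowest mean
    distance to another cluster. Values near 1 mean clean separation."""
    clusters = set(labels)
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        own = [abs(p - q) for j, (q, m) in enumerate(zip(points, labels))
               if m == l and j != i]
        if not own:            # singleton cluster: silhouette is 0 by convention
            scores.append(0.0)
            continue
        a = sum(own) / len(own)
        b = min(
            sum(abs(p - q) for q, m in zip(points, labels) if m == other)
            / sum(1 for m in labels if m == other)
            for other in clusters if other != l
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```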

19 pages, 41116 KiB  
Article
Residual-Attention UNet++: A Nested Residual-Attention U-Net for Medical Image Segmentation
by Zan Li, Hong Zhang, Zhengzhen Li and Zuyue Ren
Appl. Sci. 2022, 12(14), 7149; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147149 - 15 Jul 2022
Cited by 16 | Viewed by 3710
Abstract
Image segmentation is a fundamental technology in image processing and computer vision. Medical image segmentation is an important application of image segmentation and plays an increasingly important role in clinical diagnosis and treatment, and deep learning has made great progress in this area. In this paper, we propose Residual-Attention UNet++, an extension of the UNet++ model with a residual unit and an attention mechanism. The residual unit alleviates the degradation problem, while the attention mechanism increases the weight of the target area and suppresses background regions irrelevant to the segmentation task. Three medical image datasets (skin cancer, cell nuclei, and coronary arteries in angiography) were used to validate the proposed model. Residual-Attention UNet++ achieved superior evaluation scores: an Intersection over Union (IoU) of 82.32% and a Dice coefficient of 88.59% on the skin cancer dataset, a Dice coefficient of 85.91% and an IoU of 87.74% on the cell nuclei dataset, and a Dice coefficient of 72.48% and an IoU of 66.57% on the angiography dataset.
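The IoU and Dice scores reported above are the two standard overlap metrics for segmentation masks, computed as below. Note that for any single pair of masks Dice is always at least as large as IoU, a useful sanity check when reading such tables.

```python
def iou_and_dice(mask_a, mask_b):
    """Intersection over Union and Dice coefficient for two flat binary
    masks: IoU = |A ∩ B| / |A ∪ B|, Dice = 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    a_sum, b_sum = sum(mask_a), sum(mask_b)
    union = a_sum + b_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (a_sum + b_sum) if a_sum + b_sum else 1.0
    return iou, dice
```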

18 pages, 1943 KiB  
Article
An Attention-Based Convolutional Neural Network for Acute Lymphoblastic Leukemia Classification
by Muhammad Zakir Ullah, Yuanjie Zheng, Jingqi Song, Sehrish Aslam, Chenxi Xu, Gogo Dauda Kiazolu and Liping Wang
Appl. Sci. 2021, 11(22), 10662; https://0-doi-org.brum.beds.ac.uk/10.3390/app112210662 - 12 Nov 2021
Cited by 28 | Viewed by 2941
Abstract
Leukemia is a blood cancer that affects people of all ages and is one of the leading causes of death worldwide. Acute lymphoblastic leukemia (ALL) is the most widely recognized type of leukemia, found in the bone marrow. Traditional diagnostic techniques such as blood and bone marrow examinations are slow and painful, creating demand for non-invasive and fast methods. This work presents a non-invasive, convolutional neural network (CNN)-based approach that uses medical images to perform the diagnosis. The proposed CNN-based model uses an attention module called Efficient Channel Attention (ECA) with the Visual Geometry Group network (VGG16) to extract higher-quality deep features from the image dataset, leading to better feature representation and better classification results. The ECA module helps to overcome the morphological similarities between ALL cancer and healthy cell images. Various augmentation techniques are also employed to increase the quality and quantity of training data. We used the classification of normal vs. malignant cells (C-NMC) dataset and divided it into seven folds based on subject-level variability, which previous methods usually ignore. Experimental results show that our proposed CNN model successfully extracts deep features and achieved an accuracy of 91.1%. These findings suggest that the proposed method could be used to diagnose ALL and would help pathologists.
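The ECA module works by global-average-pooling each feature channel, running a small 1-D convolution across the resulting channel descriptors, and passing the result through a sigmoid to get one attention weight per channel. The dependency-free sketch below uses a fixed averaging kernel where the published module learns the 1-D conv weights, so it is illustrative only.

```python
import math


def eca_weights(channel_means, k=3):
    """Efficient Channel Attention (ECA) in miniature: a size-k 1-D
    convolution over per-channel global-average-pooled descriptors,
    followed by a sigmoid, yields one attention weight per channel.
    A fixed averaging kernel stands in for the learned conv weights."""
    half = k // 2
    n = len(channel_means)
    kernel = [1.0 / k] * k
    weights = []
    for i in range(n):
        acc = 0.0
        for j in range(-half, half + 1):
            if 0 <= i + j < n:          # zero-pad at the channel edges
                acc += kernel[j + half] * channel_means[i + j]
        weights.append(1.0 / (1.0 + math.exp(-acc)))  # sigmoid
    return weights
```

Each feature map is then multiplied by its weight, so channels whose neighbourhood carries a stronger pooled response are emphasized at almost no parameter cost.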
