Artificial Intelligence-Based Applications in Medical Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 April 2023) | Viewed by 17196

Special Issue Editors

Guest Editor
FEIT, UTS, 15 Broadway, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: rehabilitation engineering; biomedical instrumentation; physiological system modeling; data acquisition and distribution; system control and parameter identification

Guest Editor
School of Biomedical Engineering, Faculty of Engineering and IT, University of Technology Sydney (UTS), Ultimo, NSW 2007, Australia
Interests: single molecule imaging; super resolution imaging; biophysics; chromatin structure; cardiovascular disease

Guest Editor
Graduate School of Biomedical Engineering, University of New South Wales (UNSW), Sydney, NSW 2052, Australia
Interests: biomedical system modelling and control; data analysis; machine learning

Special Issue Information

Dear Colleagues,

Doctors have used medical images to diagnose diseases for decades. Using artificial intelligence (especially deep learning) to process and recognize medical images is widely practiced.

Medical images currently fall into three main types. Radiographic images reveal features at the organ level; molecular images, obtained through electron microscopy, show detail at the molecular level; histopathological and molecular images are acquired by invasive methods. Thanks to breakthroughs in AI, especially deep learning, previous research has applied all three types of medical images to disease detection, diagnosis, and prognosis prediction. AI-based approaches have also been used to identify disease biomarkers, enabling disease progression prediction and earlier, more accurate diagnosis, thereby reducing medical expenses and saving lives. AI-based image processing and its medical applications therefore offer significant economic and social benefits and have attracted much attention recently.

This Special Issue will focus on AI-based solutions and applications developed to meet the challenges of medical image processing, analysis, and classification. In particular, it will highlight innovative AI-based technologies for disease detection, diagnosis, and prognosis prediction, and will examine how these technologies can improve the quality and efficiency of medical image-based diagnosis in current and future applications. Covered techniques include denoising, segmentation, clustering, feature extraction, classification, and rapid and effective disease diagnosis.

The Special Issue will welcome submissions of original research articles, case studies, and critical reviews on a wide range of topics including, but not limited to:

2D/3D/4D image segmentation using deep learning;

Artificial Intelligence solutions for the analysis of X-ray images;

AI-based detection and classification of knee diseases;

Biometric recognition system using deep learning;

AI-based reconstruction, segmentation, registration, and classification of MRI images;

Artificial Intelligence in lung tumor/cancer imaging diagnoses;

AI applications in molecular imaging and nuclear medicine;

AI models for imaging diagnosis;

AI-aided diagnosis for liver tumors in ultrasonography;

Breast imaging studies using Artificial Intelligence;

AI-based automated detection of particles in 2D and 3D images;

AI-based 4D medical image computing and visualization.

Dr. Steven Su
Dr. Qian Peter Su
Dr. Ahmadreza Argha
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Multimodal medical image
  • Medical image analysis
  • Medical image processing
  • Image segmentation
  • Image classification
  • Health informatics

Published Papers (6 papers)


Research

26 pages, 50595 KiB  
Article
Enhanced Deep-Learning-Based Automatic Left-Femur Segmentation Scheme with Attribute Augmentation
by Kamonchat Apivanichkul, Pattarapong Phasukkit, Pittaya Dankulchai, Wiwatchai Sittiwong and Tanun Jitwatcharakomol
Sensors 2023, 23(12), 5720; https://0-doi-org.brum.beds.ac.uk/10.3390/s23125720 - 19 Jun 2023
Cited by 3 | Viewed by 905
Abstract

This research proposes augmenting cropped computed tomography (CT) slices with data attributes to enhance the performance of a deep-learning-based automatic left-femur segmentation scheme. The data attribute is the lying position for the left-femur model. In the study, the deep-learning-based automatic left-femur segmentation scheme was trained, validated, and tested using eight categories of CT input datasets for the left femur (F-I–F-VIII). The segmentation performance was assessed by Dice similarity coefficient (DSC) and intersection over union (IoU); and the similarity between the predicted 3D reconstruction images and ground-truth images was determined by spectral angle mapper (SAM) and structural similarity index measure (SSIM). The left-femur segmentation model achieved the highest DSC (88.25%) and IoU (80.85%) under category F-IV (using cropped and augmented CT input datasets with large feature coefficients), with an SAM and SSIM of 0.117–0.215 and 0.701–0.732. The novelty of this research lies in the use of attribute augmentation in medical image preprocessing to enhance the performance of the deep-learning-based automatic left-femur segmentation scheme.
(This article belongs to the Special Issue Artificial Intelligence-Based Applications in Medical Imaging)
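The DSC and IoU figures quoted in the abstract above are standard overlap metrics for binary segmentation masks. A minimal illustrative sketch in Python with NumPy (not the authors' implementation; the mask contents are made up):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Dice similarity coefficient and intersection over union for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2x3 masks for illustration only
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_and_iou(pred, truth)
# intersection = 2, |pred| = |truth| = 3, union = 4 → dice = 4/6 ≈ 0.667, iou = 0.5
```

DSC weights the intersection twice, so it is always at least as large as IoU on the same pair of masks, consistent with the DSC > IoU pattern in the reported results.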

21 pages, 4244 KiB  
Article
Image Recovery from Synthetic Noise Artifacts in CT Scans Using Modified U-Net
by Rudy Gunawan, Yvonne Tran, Jinchuan Zheng, Hung Nguyen and Rifai Chai
Sensors 2022, 22(18), 7031; https://0-doi-org.brum.beds.ac.uk/10.3390/s22187031 - 16 Sep 2022
Cited by 5 | Viewed by 1813
Abstract

Computed Tomography (CT) is commonly used for cancer screening as it utilizes low radiation for the scan. One problem with low-dose scans is the noise artifacts associated with a low photon count, which can reduce the success rate of cancer detection during radiologist assessment. The noise must be removed to restore detail clarity. We propose a noise removal method using a new Convolutional Neural Network (CNN) model. Even though the network training time is long, the result is better than that of other CNN models in both quality score and visual observation. The proposed CNN model uses a stacked modified U-Net with a specific number of feature maps per layer to improve image quality, as observed in the average PSNR quality score improvement over 174 images. The next best model scores 0.54 points lower on average. Although the score difference is less than 1 point, the resulting image is closer to the full-dose scan image. We used separate testing data to verify that the model can handle different noise densities. Besides comparing CNN configurations, we discuss the denoising quality of the CNN compared to classical denoising, in which the noise characteristics affect quality.
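The PSNR quality score used above to rank the denoising models can be computed in a few lines. A hedged sketch (illustrative flat images with a uniform error, not data from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Hypothetical 8x8 images: a flat reference and a copy offset by a uniform error of 5
ref = np.full((8, 8), 100.0)
noisy = ref + 5.0                            # MSE = 25
val = psnr(ref, noisy)                       # 10 * log10(255**2 / 25) ≈ 34.15 dB
```

Because PSNR is logarithmic in the mean squared error, the 0.54-point average gap quoted in the abstract corresponds to a modest but consistent reduction in residual noise energy.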

16 pages, 4798 KiB  
Article
Automated Precancerous Lesion Screening Using an Instance Segmentation Technique for Improving Accuracy
by Patiyus Agustiansyah, Siti Nurmaini, Laila Nuranna, Irfannuddin Irfannuddin, Rizal Sanif, Legiran Legiran, Muhammad Naufal Rachmatullah, Gavira Olipa Florina, Ade Iriani Sapitri and Annisa Darmawahyuni
Sensors 2022, 22(15), 5489; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155489 - 22 Jul 2022
Cited by 2 | Viewed by 2280
Abstract

Precancerous screening using visual inspection with acetic acid (VIA) is recommended by the World Health Organization (WHO) for low- and middle-income countries (LMICs). However, because of the limited number of gynecological oncologists in LMICs, VIA screening is primarily performed by general clinicians, nurses, or midwives (medical workers). An inability to recognize the significant pathophysiology of human papillomavirus (HPV) infection, in terms of the columnar epithelial-cell, squamous epithelial-cell, and white-spot regions with abnormal blood vessels, is compounded by the wide range of sensitivity (49–98%) and specificity (75–91%) achieved by VIA screening; this can lead to false results and high interobserver variance. Hence, automated detection of the columnar area (CA), the subepithelial region of the squamocolumnar junction (SCJ), and acetowhite (AW) lesions is needed to support an accurate diagnosis. This study proposes a mask-RCNN architecture to simultaneously segment, classify, and detect CA and AW lesions. We conducted several experiments using 262 VIA+ cervicogram images and 222 VIA− cervicogram images. The proposed model provided satisfactory intersection-over-union performance of about 63.60% for the CA and about 73.98% for AW lesions. The Dice similarity coefficient was about 75.67% for the CA and about 80.49% for AW lesions. The model also performed well in cervical-cancer precursor-lesion detection, with a mean average precision of about 86.90% for the CA and about 100% for AW lesions, while achieving 100% sensitivity and 92% specificity. Our proposed model, with its instance segmentation approach, can segment, detect, and classify cervical-cancer precursor lesions with satisfying performance from a VIA cervicogram alone.
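The sensitivity and specificity figures quoted above follow directly from confusion-matrix counts. A small sketch with hypothetical counts (chosen only to illustrate the formulas, not taken from the paper):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcome: no missed positives, 4 false alarms among 50 negatives
sens, spec = sensitivity_specificity(tp=50, fn=0, tn=46, fp=4)
# sens = 50/50 = 1.0 (100% sensitivity), spec = 46/50 = 0.92 (92% specificity)
```

Sensitivity measures how many true lesions are caught, while specificity measures how many lesion-free cases are correctly cleared; reporting both guards against a screener that simply flags everything.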

17 pages, 5703 KiB  
Article
Multi-Scale Attention Convolutional Network for Masson Stained Bile Duct Segmentation from Liver Pathology Images
by Chun-Han Su, Pau-Choo Chung, Sheng-Fung Lin, Hung-Wen Tsai, Tsung-Lung Yang and Yu-Chieh Su
Sensors 2022, 22(7), 2679; https://0-doi-org.brum.beds.ac.uk/10.3390/s22072679 - 31 Mar 2022
Cited by 8 | Viewed by 2410
Abstract

In clinical practice, the Ishak scoring system is adopted to evaluate the grading and staging of hepatitis according to whether portal areas show fibrous expansion, bridging with other portal areas, or bridging with central veins. Based on these staging criteria, portal areas and central veins must be identified when performing Ishak staging. Bile ducts have variant types and are very difficult to detect under a single magnification, so pathologists must observe bile ducts at different magnifications to obtain sufficient information. In routine clinical practice, however, this makes the pathologic examination labor-intensive and expensive. Automatic quantitative analysis of pathologic examinations has therefore seen increased demand and attracted significant attention recently. A multi-scale-input attention convolutional network is proposed in this study to simulate pathologists' procedure of observing bile ducts under different magnifications in liver biopsy. The proposed multi-scale attention network integrates cell-level information and adjacent structural feature information for bile duct segmentation. In addition, the attention mechanism of the proposed model enables the network to focus the segmentation task on the high-magnification input, reducing the influence of the low-magnification input while still providing a wider field of surrounding information. In comparison with existing models, including FCN, U-Net, SegNet, DeepLabv3, and DeepLabv3-plus, the experimental results demonstrate that the proposed model improves segmentation performance on the Masson bile duct segmentation task, with 72.5% IOU and 84.1% F1-score.

20 pages, 1986 KiB  
Article
Brain Magnetic Resonance Imaging Classification Using Deep Learning Architectures with Gender and Age
by Imayanmosha Wahlang, Arnab Kumar Maji, Goutam Saha, Prasun Chakrabarti, Michal Jasinski, Zbigniew Leonowicz and Elzbieta Jasinska
Sensors 2022, 22(5), 1766; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051766 - 24 Feb 2022
Cited by 27 | Viewed by 2883
Abstract

The use of effective classification techniques on Magnetic Resonance Imaging (MRI) helps in the proper diagnosis of brain tumors. Previous studies have focused on classifying brain MRIs as normal (nontumorous) or abnormal (tumorous) using methods such as the Support Vector Machine (SVM) and AlexNet. In this paper, deep learning architectures are used to classify brain MRI images as normal or abnormal, with gender and age added as higher-level attributes for more accurate and meaningful classification. A deep learning Convolutional Neural Network (CNN)-based technique and a Deep Neural Network (DNN) are also proposed for effective classification. Other deep learning architectures, such as LeNet, AlexNet, and ResNet, as well as traditional approaches such as SVM, are also implemented to analyze and compare the results. Age and gender attributes are found to be useful and play a key role in classification, and they can be considered essential factors in brain tumor analysis. It is also worth noting that, in most circumstances, the proposed technique outperforms both SVM and AlexNet. The overall accuracy obtained is 88% (LeNet-inspired model) and 80% (CNN-DNN), compared to SVM (82%) and AlexNet (64%), with best accuracies of 100%, 92%, 92%, and 81%, respectively.

20 pages, 9865 KiB  
Article
Deep Learning-Based Computer-Aided Fetal Echocardiography: Application to Heart Standard View Segmentation for Congenital Heart Defects Detection
by Siti Nurmaini, Muhammad Naufal Rachmatullah, Ade Iriani Sapitri, Annisa Darmawahyuni, Bambang Tutuko, Firdaus Firdaus, Radiyati Umi Partan and Nuswil Bernolian
Sensors 2021, 21(23), 8007; https://0-doi-org.brum.beds.ac.uk/10.3390/s21238007 - 30 Nov 2021
Cited by 28 | Viewed by 5174
Abstract

Accurate segmentation of the fetal heart in echocardiography images is essential for detecting structural abnormalities such as congenital heart defects (CHDs). Due to wide variations attributed to different factors, such as maternal obesity, abdominal scars, amniotic fluid volume, and great vessel connections, this process remains a challenging problem. Expert-based CHD detection is generally substandard; the accuracy of measurements remains highly dependent on the examiner's training, skills, and experience. To automate this process, this study proposes deep learning-based computer-aided fetal heart echocardiography examination with an instance segmentation approach, which segments the four standard heart views and detects defects simultaneously. We conducted several experiments with 1149 fetal heart images to predict 24 objects, including the four shapes of the fetal heart standard views, 17 heart-chamber objects across the views, and three cases of congenital heart defects. The results showed that the proposed model achieved satisfactory performance for standard-view segmentation, with 79.97% intersection over union and an 89.70% Dice similarity coefficient. It also performed well in CHD detection, with a mean average precision of around 98.30% for intra-patient variation and 82.42% for inter-patient variation. We believe that automatic segmentation and detection techniques can make an important contribution toward improving congenital heart disease diagnosis rates.
