Computer-Aided Diagnosis Based on AI and Sensor Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 3467

Special Issue Editor


Guest Editor
1. College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
2. Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX 79968, USA
Interests: medical imaging informatics; telemedicine; computerized biomedical imaging and molecular imaging biomarker analysis

Special Issue Information

Dear Colleagues,

Advances in medical sensing technology enable the real-time detection of a patient’s vital signs, improving the efficiency of medical care through the early detection of disease, diagnosis, and treatment evaluation. The large volumes of data generated by sensors have, in turn, driven the maturation of deep learning and big data analysis techniques. However, massive data also bring a new set of challenges. How to make computer-aided diagnosis more intelligent, convenient, and efficient through sensing technology has made deep learning and big data analysis a particularly active research topic.

Therefore, this Special Issue seeks the latest fundamental advances in addressing the challenges of medical artificial intelligence. Specifically, it will explore the challenges faced by practical applications and propose feasible solutions based on advanced deep learning and big data technologies. Both application and methodological research studies are welcome. Leading topics include, but are not limited to, the following:

  • AI-based clinical decision-making;
  • Biomedical information processing;
  • Computational intelligence in bio- and clinical medicine;
  • Computer-aided diagnosis models;
  • Data analytics and mining for biomedical decision support;
  • Data science theory, methodologies and techniques;
  • Healthcare applications and big data analysis;
  • Intelligent and process-aware information systems in healthcare and medicine;
  • Intelligent detection systems;
  • Intelligent devices and instruments;
  • Machine learning theory, methodology and algorithms;
  • Medical knowledge engineering;
  • Natural language processing in medicine;
  • New computational platforms and models for biomedicine;
  • Sensor fusion of biomedical data.

Prof. Dr. Wei Qian
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

24 pages, 10833 KiB  
Article
2S-BUSGAN: A Novel Generative Adversarial Network for Realistic Breast Ultrasound Image with Corresponding Tumor Contour Based on Small Datasets
by Jie Luo, Heqing Zhang, Yan Zhuang, Lin Han, Ke Chen, Zhan Hua, Cheng Li and Jiangli Lin
Sensors 2023, 23(20), 8614; https://doi.org/10.3390/s23208614 - 20 Oct 2023
Viewed by 926
Abstract
Deep learning (DL) models in breast ultrasound (BUS) image analysis face challenges with data imbalance and limited atypical tumor samples. Generative adversarial networks (GANs) address these challenges by providing efficient data augmentation for small datasets. However, current GAN approaches fail to capture the structural features of BUS images, so the generated images lack structural legitimacy and appear unrealistic. Furthermore, generated images require manual annotation for different downstream tasks before they can be used. We therefore propose a two-stage GAN framework, 2s-BUSGAN, for generating annotated BUS images. It consists of a Mask Generation Stage (MGS) and an Image Generation Stage (IGS), which generate benign and malignant BUS images together with the corresponding tumor contours. Moreover, we employ a Feature-Matching Loss (FML) to enhance the quality of generated images and a Differential Augmentation Module (DAM) to improve GAN performance on small datasets. We conducted experiments on two datasets, BUSI and Collected. The results indicate that the quality of generated images is improved compared with traditional GAN methods. Our generated images were also evaluated by ultrasound experts, demonstrating that they can be difficult to distinguish from real scans. A comparative evaluation showed that our method also outperforms traditional GAN methods when applied to training segmentation and classification models. Our method achieved classification accuracies of 69% and 85.7% on the two datasets, respectively, about 3% and 2% higher than the traditional augmentation model. The segmentation model trained on the 2s-BUSGAN-augmented datasets achieved Dice scores of 75% and 73% on the two datasets, respectively, higher than those of the traditional augmentation methods. Our research tackles the challenges of imbalanced and limited BUS image data, and the 2s-BUSGAN augmentation method holds potential for enhancing deep learning model performance in the field.
Full article
(This article belongs to the Special Issue Computer-Aided Diagnosis Based on AI and Sensor Technology)
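The feature-matching loss mentioned in this abstract is a standard GAN stabilization technique: rather than matching only the discriminator's final output, the generator is also penalized for differences between discriminator feature maps on real and generated images. A minimal NumPy sketch, where the function name, layer shapes, and random features are illustrative and not the authors' implementation:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """L1 feature-matching loss: mean absolute difference between
    discriminator feature maps of real and generated images,
    averaged over all layers."""
    assert len(real_feats) == len(fake_feats)
    per_layer = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return float(np.mean(per_layer))

# Toy example: features from two hypothetical discriminator layers
# (batch, channels, height, width); real networks supply these.
rng = np.random.default_rng(0)
real = [rng.normal(size=(4, 64, 8, 8)), rng.normal(size=(4, 128, 4, 4))]
fake = [rng.normal(size=(4, 64, 8, 8)), rng.normal(size=(4, 128, 4, 4))]
loss = feature_matching_loss(real, fake)
```

In practice the feature lists would come from intermediate layers of the trained discriminator, and the loss would be added to the usual adversarial objective.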

21 pages, 4502 KiB  
Article
TDFusion: When Tensor Decomposition Meets Medical Image Fusion in the Nonsubsampled Shearlet Transform Domain
by Rui Zhang, Zhongyang Wang, Haoze Sun, Lizhen Deng and Hu Zhu
Sensors 2023, 23(14), 6616; https://doi.org/10.3390/s23146616 - 23 Jul 2023
Cited by 1 | Viewed by 1003
Abstract
In this paper, a unified optimization model for medical image fusion based on tensor decomposition (TD) and the non-subsampled shearlet transform (NSST) is proposed. The model uses the NSST to separate two source images into high-frequency (HF) and low-frequency (LF) parts, which are fused from the perspective of tensor decomposition to obtain a mixed-frequency fused image. Because of the structural differences between the high-frequency and low-frequency representations, potential information loss may occur in the fused images. To address this issue, we introduce a joint static and dynamic guidance (JSDG) technique to complement the HF/LF information. To improve the fused images, we combine the alternating direction method of multipliers (ADMM) with gradient descent for parameter optimization. Finally, the fused images are reconstructed by applying the inverse NSST to the fused high-frequency and low-frequency bands. Extensive experiments confirm the superiority of our proposed TDFusion over other comparison methods. Full article
(This article belongs to the Special Issue Computer-Aided Diagnosis Based on AI and Sensor Technology)
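The abstract describes fusing LF and HF bands via tensor decomposition with guidance. As a much simpler illustration of band-wise fusion, the classic rule of averaging the low-frequency bands and keeping the larger-magnitude high-frequency coefficient can be sketched as follows; this is a generic fusion rule, not the TDFusion optimization model, and the direct sum at the end stands in for the inverse NSST:

```python
import numpy as np

def fuse_bands(lf_a, lf_b, hf_a, hf_b):
    """Illustrative two-band fusion rule: average the low-frequency
    parts (preserves overall intensity) and pick the coefficient with
    the larger magnitude in the high-frequency parts (preserves edge
    and texture detail)."""
    lf_fused = 0.5 * (lf_a + lf_b)
    hf_fused = np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)
    # Crude stand-in for the inverse transform: sum the fused bands.
    return lf_fused + hf_fused
```

A real pipeline would obtain `lf_*`/`hf_*` from a multi-scale decomposition such as the NSST and reconstruct with its proper inverse.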

17 pages, 32212 KiB  
Article
A Hierarchical Siamese Network for Noninvasive Staging of Liver Fibrosis Based on US Image Pairs of the Liver and Spleen
by Xue Wang, Ling Song, Yan Zhuang, Lin Han, Ke Chen, Jiangli Lin and Yan Luo
Sensors 2023, 23(12), 5450; https://doi.org/10.3390/s23125450 - 8 Jun 2023
Viewed by 1142
Abstract
Due to the heterogeneity of ultrasound (US) images and the indeterminate US texture of liver fibrosis (LF), the automatic evaluation of LF based on US images remains challenging. This study therefore proposes a hierarchical Siamese network that combines information from liver and spleen US images to improve the accuracy of LF grading. The proposed method has two stages. In stage one, a dual-channel Siamese network was trained to extract features from paired liver and spleen patches cropped from US images to avoid vascular interference, and the L1 distance was used to quantify the liver–spleen differences (LSDs). In stage two, the pretrained weights from stage one were transferred into the Siamese feature extractor of the LF staging model, and a classifier was trained on the fusion of the liver and LSD features for LF staging. The study was conducted retrospectively on US images of 286 patients with histologically proven liver fibrosis stages. Our method achieved a precision of 93.92% and a sensitivity of 91.65% for cirrhosis (S4) diagnosis, about 8% higher than the baseline model. The accuracy of advanced fibrosis (≥S3) diagnosis and of multi-stage classification (≤S2 vs. S3 vs. S4) both improved by about 5%, reaching 90.40% and 83.93%, respectively. This study proposes a novel method that combines hepatic and splenic US images to improve the accuracy of LF staging, indicating the great potential of liver–spleen texture comparison for the noninvasive assessment of LF based on US images. Full article
(This article belongs to the Special Issue Computer-Aided Diagnosis Based on AI and Sensor Technology)
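The liver–spleen difference (LSD) feature described in the abstract is an L1 distance between embeddings produced by a shared (Siamese) encoder, concatenated with the liver embedding for staging. A minimal NumPy sketch, with a toy statistics-based encoder standing in for the trained network (all names are illustrative, not the authors' code):

```python
import numpy as np

def liver_spleen_difference(emb_liver, emb_spleen):
    """Element-wise L1 distance between paired liver and spleen
    embeddings -- the liver-spleen difference (LSD) feature."""
    return np.abs(emb_liver - emb_spleen)

def staging_feature(liver_patch, spleen_patch, encoder):
    """Pass both patches through the *same* encoder (the Siamese
    weight sharing), then concatenate the liver embedding with the
    LSD vector, mirroring the fusion described in the abstract."""
    emb_liver = encoder(liver_patch)
    emb_spleen = encoder(spleen_patch)
    return np.concatenate([emb_liver,
                           liver_spleen_difference(emb_liver, emb_spleen)])

# Toy stand-in encoder: two hand-crafted statistics per patch.
toy_encoder = lambda patch: np.array([patch.mean(), patch.std()])
feat = staging_feature(np.ones((64, 64)), np.zeros((64, 64)), toy_encoder)
```

In the paper's setting, the encoder would be the pretrained Siamese feature extractor and the concatenated vector would feed the staging classifier.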
