Trustworthy Computer-Aided Diagnosis of Breast Cancer Using Ultrasound

A special issue of Healthcare (ISSN 2227-9032). This special issue belongs to the section "Health Informatics and Big Data".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 9702

Special Issue Editors

Dr. Min Xian
Guest Editor
Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
Interests: deep learning; multitask learning; domain-enhanced learning; adversarial machine learning; image segmentation; medical image analysis; breast cancer detection

Dr. Aleksandar (Alex) Vakanski
Guest Editor
Industrial Technology, University of Idaho, Idaho Falls, ID 83402, USA
Interests: biomedical informatics; machine learning and artificial intelligence; rehabilitation assessment; medical imaging

Special Issue Information

Dear Colleagues,

Breast cancer ranks first in incidence among females in 159 countries, and it alone accounted for approximately 25% of new female cancers globally in 2020. Recently, many studies have demonstrated that breast ultrasound with advanced computer-aided diagnosis (CAD) systems can achieve high sensitivity (94–95%) and high specificity (89–93%) for breast cancer detection. To take advantage of the learning ability of complex models, the most advanced approaches train deep learning models in an 'end-to-end' manner, in which the inputs are breast images and the outputs are tumor regions and/or probabilities of malignancy. This 'end-to-end' design can reduce operator dependence in cancer detection; however, it lacks the explainability to provide evidence supporting the diagnostic decision. Another major weakness of existing CAD systems is the lack of robustness: excellent performance on images from one set often degrades significantly on images from a different set, or even on images from the same set with minor perturbations. The lack of explainability and robustness has a profound impact on the trustworthiness of these systems and represents a crucial barrier impeding the adoption of CAD systems in healthcare. In this Special Issue, we therefore welcome submissions introducing new methodologies for trustworthy, ultrasound-based CAD of breast cancer, characterized by high generalizability, explainability, and robustness.

Dr. Min Xian
Dr. Aleksandar (Alex) Vakanski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Healthcare is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • trustworthy computer-aided diagnosis
  • breast ultrasound
  • breast cancer detection
  • breast tumor segmentation
  • explainable/interpretable machine learning
  • robust deep learning models

Published Papers (5 papers)


Research

14 pages, 691 KiB  
Article
A Novel Fuzzy Relative-Position-Coding Transformer for Breast Cancer Diagnosis Using Ultrasonography
by Yanhui Guo, Ruquan Jiang, Xin Gu, Heng-Da Cheng and Harish Garg
Healthcare 2023, 11(18), 2530; https://doi.org/10.3390/healthcare11182530 - 13 Sep 2023
Cited by 1 | Viewed by 961
Abstract
Breast cancer is a leading cause of death in women worldwide, and early detection is crucial for successful treatment. Computer-aided diagnosis (CAD) systems have been developed to assist doctors in identifying breast cancer on ultrasound images. In this paper, we propose a novel fuzzy relative-position-coding (FRPC) Transformer to classify breast ultrasound (BUS) images for breast cancer diagnosis. The proposed FRPC Transformer utilizes the self-attention mechanism of Transformer networks combined with fuzzy relative-position-coding to capture global and local features of the BUS images. The performance of the proposed method is evaluated on one benchmark dataset and compared with that of existing Transformer approaches using various metrics. The experimental results demonstrate the superiority of the proposed method, which achieves higher accuracy, sensitivity, specificity, and F1 score (all 90.52%) and a larger area under the receiver operating characteristic (ROC) curve (0.91) than the original Transformer model (89.54%, 89.54%, 89.54%, and 0.89, respectively). Overall, the proposed FRPC Transformer is a promising approach for breast cancer diagnosis. It has potential applications in clinical practice and can contribute to the early detection of breast cancer.
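The abstract does not reproduce the FRPC formulation itself; as a rough, hedged illustration of the general idea (not the authors' implementation), the sketch below adds a relative-position bias to a standard single-head self-attention layer, with the bias derived from a Gaussian fuzzy membership over token distance. The module name, the membership function, and the width parameter sigma are assumptions made for illustration only.

```python
# Illustrative sketch only: self-attention with a relative-position bias
# computed from a Gaussian fuzzy membership over token distance.
# The actual FRPC formulation in the paper may differ.
import torch
import torch.nn as nn


class FuzzyRelPosAttention(nn.Module):
    def __init__(self, dim: int, sigma: float = 4.0):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.sigma = sigma  # width of the fuzzy membership function (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. flattened patch embeddings of a BUS image
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (b, n, n) attention logits

        # Gaussian fuzzy membership of the relative distance between tokens:
        # nearby tokens get membership close to 1, distant tokens close to 0.
        pos = torch.arange(n, device=x.device, dtype=x.dtype)
        rel = pos[None, :] - pos[:, None]                      # (n, n) relative offsets
        membership = torch.exp(-(rel ** 2) / (2 * self.sigma ** 2))
        attn = attn + torch.log(membership + 1e-6)             # bias the logits

        attn = attn.softmax(dim=-1)
        return self.proj(attn @ v)


if __name__ == "__main__":
    tokens = torch.randn(2, 49, 64)   # e.g. 7x7 patch embeddings
    out = FuzzyRelPosAttention(dim=64)(tokens)
    print(out.shape)                  # torch.Size([2, 49, 64])
```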

11 pages, 1396 KiB  
Article
Knowledge Tensor-Aided Breast Ultrasound Image Assistant Inference Framework
by Guanghui Li, Lingli Xiao, Guanying Wang, Ying Liu, Longzhong Liu and Qinghua Huang
Healthcare 2023, 11(14), 2014; https://doi.org/10.3390/healthcare11142014 - 13 Jul 2023
Cited by 3 | Viewed by 1205
Abstract
Breast cancer is one of the most prevalent cancers in women nowadays, and medical intervention at an early stage can significantly improve the prognosis of patients. Breast ultrasound (BUS) is a widely used tool for the early screening of breast cancer in primary care hospitals, but it relies heavily on the ability and experience of physicians. Accordingly, we propose a knowledge tensor-based Breast Imaging Reporting and Data System (BI-RADS)-score-assisted generalized inference model, which uses the BI-RADS scores of senior physicians as the gold standard to construct a knowledge tensor model that infers the benignity or malignancy of breast tumors and assesses the diagnostic results against those of junior physicians, providing an aid for breast ultrasound diagnosis. The experimental results showed that the knowledge tensor constructed from the BI-RADS characteristics labeled by senior radiologists achieved a diagnostic AUC of 0.983 (95% confidence interval (CI) = 0.975–0.992) for benign versus malignant breast cancer, whereas the knowledge tensor constructed from the BI-RADS characteristics labeled by junior radiologists achieved only 0.849 (95% CI = 0.823–0.876). With knowledge tensor fusion, the AUC improved to 0.887 (95% CI = 0.864–0.909). Therefore, the proposed knowledge tensor can effectively help reduce the misclassification of BI-RADS characteristics by junior radiologists and, thus, improve the performance of breast-ultrasound-assisted diagnosis.
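The abstract does not specify how the knowledge tensor is built; as a minimal, hedged sketch of the general idea (not the paper's method), one can think of it as a table of empirical malignancy probabilities indexed by discretized BI-RADS descriptors labeled by senior radiologists. The descriptor names, categories, and function names below are hypothetical.

```python
# Minimal sketch, not the paper's method: a "knowledge tensor" here is an
# empirical malignancy-probability table indexed by discretized BI-RADS
# descriptors (shape, margin, echo pattern). Descriptor categories are assumed.
import numpy as np

SHAPES = ["oval", "round", "irregular"]
MARGINS = ["circumscribed", "indistinct", "spiculated"]
ECHOES = ["anechoic", "hypoechoic", "complex"]


def build_knowledge_tensor(cases):
    """cases: list of (shape, margin, echo, is_malignant) labeled by senior radiologists."""
    counts = np.zeros((len(SHAPES), len(MARGINS), len(ECHOES), 2))
    for shape, margin, echo, malignant in cases:
        i, j, k = SHAPES.index(shape), MARGINS.index(margin), ECHOES.index(echo)
        counts[i, j, k, int(malignant)] += 1
    # Laplace smoothing keeps unseen descriptor combinations usable.
    return (counts[..., 1] + 1) / (counts.sum(axis=-1) + 2)  # P(malignant | descriptors)


def infer(tensor, shape, margin, echo):
    return tensor[SHAPES.index(shape), MARGINS.index(margin), ECHOES.index(echo)]


if __name__ == "__main__":
    labeled = [("irregular", "spiculated", "hypoechoic", True),
               ("oval", "circumscribed", "anechoic", False),
               ("irregular", "indistinct", "complex", True)]
    kt = build_knowledge_tensor(labeled)
    print(f"P(malignant) = {infer(kt, 'irregular', 'spiculated', 'hypoechoic'):.2f}")
```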

24 pages, 10745 KiB  
Article
Trustworthy Breast Ultrasound Image Semantic Segmentation Based on Fuzzy Uncertainty Reduction
by Kuan Huang, Yingtao Zhang, Heng-Da Cheng and Ping Xing
Healthcare 2022, 10(12), 2480; https://doi.org/10.3390/healthcare10122480 - 8 Dec 2022
Cited by 2 | Viewed by 1668
Abstract
Medical image semantic segmentation is essential in computer-aided diagnosis systems. It can separate tissues and lesions in the image and provide valuable information to radiologists and doctors. Breast ultrasound (BUS) imaging has advantages: no radiation, low cost, portability, etc. However, it also has two unfavorable characteristics: (1) the dataset size is often small due to the difficulty in obtaining ground truths, and (2) BUS images are usually of poor quality. Trustworthy BUS image segmentation is urgently needed in breast cancer computer-aided diagnosis systems, especially for fully understanding the BUS images and segmenting the breast anatomy, which supports breast cancer risk assessment. The main challenge in this task is the uncertainty in both the pixels and channels of the BUS images. In this paper, we propose a Spatial and Channel-wise Fuzzy Uncertainty Reduction Network (SCFURNet) for BUS image semantic segmentation. The proposed architecture can reduce the uncertainty in the original segmentation frameworks. We apply the proposed method to four datasets: (1) a five-category BUS image dataset with 325 images, and (2) three BUS image datasets with only the tumor category (1830 images in total). The proposed approach is compared with state-of-the-art methods such as U-Net with VGG-16, ResNet-50/ResNet-101, Deeplab, FCN-8s, PSPNet, U-Net with information extension, attention U-Net, and U-Net with the self-attention mechanism. Because it handles uncertainty effectively and efficiently, it achieves 2.03%, 1.84%, and 2.88% improvements in the Jaccard index on the three public BUS datasets, as well as a 6.72% improvement in the tumor category and a 4.32% improvement in overall performance on the five-category dataset, compared with the original U-shaped network with ResNet-101.
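SCFURNet itself is not described in detail here; purely as a hedged sketch of how pixel- and channel-wise fuzzy uncertainty can be expressed and suppressed in a feature map (one plausible reading of the abstract, not the authors' architecture), consider the gating function below. The function name and the specific membership/uncertainty definitions are assumptions.

```python
# Rough sketch only (not SCFURNet): sigmoid membership maps feature responses
# into [0, 1]; uncertainty is highest where membership is near 0.5, and
# uncertain responses are attenuated both spatially and channel-wise.
import torch


def fuzzy_uncertainty_gate(features: torch.Tensor) -> torch.Tensor:
    """features: (batch, channels, H, W) feature map from a segmentation encoder."""
    membership = torch.sigmoid(features)                   # fuzzy membership per position
    uncertainty = 1.0 - 2.0 * (membership - 0.5).abs()     # 1 at membership 0.5, 0 at 0 or 1

    spatial_gate = 1.0 - uncertainty.mean(dim=1, keepdim=True)        # (b, 1, H, W)
    channel_gate = 1.0 - uncertainty.mean(dim=(2, 3), keepdim=True)   # (b, c, 1, 1)
    return features * spatial_gate * channel_gate


if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)
    print(fuzzy_uncertainty_gate(feats).shape)  # torch.Size([2, 32, 64, 64])
```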

17 pages, 8667 KiB  
Article
Breast Cancer Classification by Using Multi-Headed Convolutional Neural Network Modeling
by Refat Khan Pathan, Fahim Irfan Alam, Suraiya Yasmin, Zuhal Y. Hamd, Hanan Aljuaid, Mayeen Uddin Khandaker and Sian Lun Lau
Healthcare 2022, 10(12), 2367; https://doi.org/10.3390/healthcare10122367 - 25 Nov 2022
Cited by 7 | Viewed by 2819
Abstract
Breast cancer is one of the most widely recognized diseases after skin cancer. Though it can occur in all kinds of people, it is undeniably more common in women. Several imaging techniques, such as breast MRI, X-ray, thermography, mammography, ultrasound, etc., are utilized to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: benign, malignant, and normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed for quantitative outcomes using the performance measures for each procedure. The proposed framework is found to be effective: evaluation on raw images alone gives 78.97% test accuracy and evaluation on masked images gives 81.02% test precision, which could decrease human errors in the diagnostic cycle. Additionally, the described framework achieves higher accuracy when a multi-headed CNN is used with the two processed datasets based on masked and original images, where the accuracy increases to 92.31% (±2) with a Mean Squared Error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data inputs. Finally, a web interface has been developed to make this model usable by non-technical personnel.
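The abstract describes a two-input ("multi-headed") CNN fed with raw and masked BUS images and a three-way classifier, but not the layer configuration; the sketch below is a minimal illustration under assumed layer sizes, not the paper's network.

```python
# Minimal two-branch ("multi-headed") CNN sketch: one branch for the raw BUS
# image and one for its masked counterpart, fused before a 3-way classifier
# (benign / malignant / normal). Layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn


def conv_branch() -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )


class MultiHeadedCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.raw_head = conv_branch()     # processes the original grayscale image
        self.mask_head = conv_branch()    # processes the masked image
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, raw: torch.Tensor, masked: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.raw_head(raw), self.mask_head(masked)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    raw = torch.randn(4, 1, 128, 128)
    masked = torch.randn(4, 1, 128, 128)
    logits = MultiHeadedCNN()(raw, masked)
    print(logits.shape)  # torch.Size([4, 3])
```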

14 pages, 1970 KiB  
Article
ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image Segmentation
by Bryar Shareef, Aleksandar Vakanski, Phoebe E. Freer and Min Xian
Healthcare 2022, 10(11), 2262; https://doi.org/10.3390/healthcare10112262 - 11 Nov 2022
Cited by 15 | Viewed by 2423
Abstract
Breast tumor segmentation is a critical task in computer-aided diagnosis (CAD) systems for breast cancer detection because accurate tumor size, shape, and location are important for further tumor quantification and classification. However, segmenting small tumors in ultrasound images is challenging due to the speckle noise, varying tumor shapes and sizes among patients, and the existence of tumor-like image regions. Recently, deep learning-based approaches have achieved great success in biomedical image analysis, but current state-of-the-art approaches achieve poor performance for segmenting small breast tumors. In this paper, we propose a novel deep neural network architecture, namely the Enhanced Small Tumor-Aware Network (ESTAN), to accurately and robustly segment breast tumors. The Enhanced Small Tumor-Aware Network introduces two encoders to extract and fuse image context information at different scales, and utilizes row-column-wise kernels to adapt to the breast anatomy. We compare ESTAN and nine state-of-the-art approaches using seven quantitative metrics on three public breast ultrasound datasets, i.e., BUSIS, Dataset B, and BUSI. The results demonstrate that the proposed approach achieves the best overall performance and outperforms all other approaches on small tumor segmentation. Specifically, the Dice similarity coefficient (DSC) of ESTAN on the three datasets is 0.92, 0.82, and 0.78, respectively; and the DSC of ESTAN on the three datasets of small tumors is 0.89, 0.80, and 0.81, respectively.
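For readers unfamiliar with the overlap metrics quoted in these abstracts (Dice similarity coefficient and Jaccard index), the helper below shows how they are typically computed from binary segmentation masks; the papers' exact evaluation protocols (per-image averaging, handling of empty masks, etc.) may differ from this simple sketch.

```python
# Typical computation of the Dice similarity coefficient (DSC) and Jaccard
# index from a predicted binary mask and a ground-truth mask.
import numpy as np


def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)
    jaccard = intersection / (union + 1e-8)
    return float(dice), float(jaccard)


if __name__ == "__main__":
    pred = np.zeros((128, 128)); pred[30:80, 30:80] = 1    # predicted tumor region
    truth = np.zeros((128, 128)); truth[40:90, 40:90] = 1  # ground-truth tumor region
    print(dice_and_jaccard(pred, truth))
```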