AI and Medical Imaging in Breast Disease

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Medical Imaging and Theranostics".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 18876

Special Issue Editors


Dr. Karen Drukker
Guest Editor
Department of Radiology, University of Chicago, Chicago, IL, USA
Interests: breast cancer image analysis; radiomics; deep learning; risk assessment; high-risk screening; detection; diagnosis; prognosis; precision medicine

Prof. Dr. Lubomir Hadjiiski
Guest Editor
Department of Radiology, Michigan Medicine, University of Michigan, Ann Arbor, MI 48109, USA
Interests: computer-aided diagnosis; neural networks; predictive models; image processing; medical imaging

Dr. Despina Kontos
Guest Editor
Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: breast cancer; breast cancer MRI; neoadjuvant chemotherapy; biomedical image analysis; quantitative imaging biomarkers; pattern recognition; identification, characterization, and validation of imaging biomarkers; evaluation of genotype-to-phenotype associations via imaging; integrated diagnostics; risk prediction; personalized screening; prognostication and treatment of breast cancer

Dr. Marco Caballo
Guest Editor
Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
Interests: breast imaging; machine learning; deep learning; radiomics; computer-aided diagnosis; quantitative imaging biomarkers; tumor heterogeneity

Dr. Shandong Wu
Guest Editor
Department of Radiology, School of Medicine Imaging Research Center, University of Pittsburgh, Pittsburgh, PA, USA
Interests: AI and machine learning in medical imaging; breast imaging; clinical/translational studies

Special Issue Information

Dear Colleagues, 

For decades, artificial intelligence (AI) has been of interest for improving breast imaging tasks such as detection and diagnosis. Computer-aided detection of breast cancer on mammograms was among the first automated medical image analysis methods approved for clinical use. Newer techniques such as radiomics and deep learning are poised to transform medical imaging-based healthcare, with developments fueled by improvements in, and the availability of, multiple imaging modalities, larger datasets, and better computing resources. Deep learning has revolutionized analysis methods by learning imaging features directly from data rather than relying on human-engineered radiomics features. Many ongoing AI research studies address the methodology, evaluation, and applications of various breast imaging modalities, including full-field digital mammography, breast tomosynthesis, ultrasound, and MRI. AI algorithms will impact all aspects of imaging-based breast cancer patient care, with tasks including detection, diagnosis, prognosis, treatment response assessment, and risk assessment. However, the news is not all good. On the one hand, the current AI revolution has lowered the technical barriers for researchers to develop new AI methods or apply existing models to new applications, because many deep learning architectures and pre-trained weights are available as open source. On the other hand, many factors hamper translation to clinical practice, including a lack of explainability and interpretability, a lack of standardization, a lack of reproducibility and generalizability (due to, e.g., small single-institution datasets, improper training/testing protocols, or incorrect use of statistics), model aging, and bias.

This Special Issue is intended to provide an overview of the development of AI in clinical breast imaging. The primary aim is to survey exciting new applications of both ‘conventional’ methods (using human-engineered radiomics features) and deep learning AI algorithms across all aspects of imaging-based patient care for breast disease. The secondary aim is to provide a resource for researchers to help identify and mitigate potential pitfalls in the translation of their breast imaging AI applications to clinical practice.

Dr. Karen Drukker
Prof. Dr. Lubomir Hadjiiski
Dr. Despina Kontos
Dr. Marco Caballo
Dr. Shandong Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • breast cancer
  • breast disease
  • screening
  • diagnosis
  • computer-aided detection
  • computer-aided diagnosis
  • radiomics
  • radiogenomics
  • multi-omics
  • machine learning/deep learning
  • prognosis
  • treatment response assessment
  • risk assessment
  • survival analysis
  • tumor heterogeneity
  • tumor subtyping
  • microenvironment
  • clinical translation
  • radiology-pathology correlation

Published Papers (8 papers)


Research


14 pages, 2588 KiB  
Article
Quantitative Assessment of Breast-Tumor Stiffness Using Shear-Wave Elastography Histograms
by Ismini Papageorgiou, Nektarios A. Valous, Stathis Hadjidemetriou, Ulf Teichgräber and Ansgar Malich
Diagnostics 2022, 12(12), 3140; https://doi.org/10.3390/diagnostics12123140 - 13 Dec 2022
Cited by 2 | Viewed by 1763
Abstract
Purpose: Shear-wave elastography (SWE) measures tissue elasticity using ultrasound waves. This study proposes a histogram-based SWE analysis to improve breast malignancy detection. Methods: n = 22/32 (patients/tumors) benign and n = 51/64 malignant breast tumors with histological ground truth were included. Colored SWE heatmaps were adjusted to a 0–180 kPa scale. Normalized, 250-binned RGB histograms were used as image descriptors based on skewness and area under the curve (AUC). The histogram method was compared to conventional SWE metrics, such as (1) the qualitative 5-point scale classification and (2) average stiffness (SWEavg)/maximal tumor stiffness (SWEmax) within the tumor B-mode boundaries. Results: The SWEavg and SWEmax did not discriminate malignant lesions in this database (p > 0.05, rank sum test). RGB histograms, however, differed between malignant and benign tumors (p < 0.001, Kolmogorov–Smirnov test). The AUC analysis of histograms revealed the reduction of soft-tissue components as a significant SWE biomarker (p = 0.03, rank sum). The diagnostic sensitivity of the suggested method is still low (Se = 0.30 at Sp = 0.90) and a subject for improvement in future studies. Conclusions: Histogram-based SWE quantitation improved the diagnostic accuracy for malignancy compared to conventional average SWE metrics. The sensitivity is a subject for improvement in future studies.
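The core of the histogram approach described in the abstract (normalized, finely binned stiffness histograms plus a skewness descriptor and a "soft-tissue" fraction) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the 30 kPa "soft" threshold and the assumption that pixel stiffness values have already been recovered from the colored heatmap are ours.

```python
import numpy as np

def swe_histogram_descriptors(stiffness_kpa, n_bins=250, scale_max=180.0):
    """Normalized stiffness histogram plus summary descriptors for one tumor.

    Assumes stiffness values (kPa) were already recovered from the colored
    SWE heatmap on the 0-180 kPa scale mentioned in the abstract.
    """
    values = np.clip(np.asarray(stiffness_kpa, dtype=float).ravel(), 0.0, scale_max)
    hist, edges = np.histogram(values, bins=n_bins, range=(0.0, scale_max))
    hist = hist / hist.sum()  # normalize so the histogram sums to 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Fraction of "soft" tissue: histogram mass below a (hypothetical) 30 kPa cutoff
    soft_fraction = float(hist[centers < 30.0].sum())
    mu, sigma = values.mean(), values.std()
    skewness = float(((values - mu) ** 3).mean() / sigma ** 3)
    return {"histogram": hist, "skewness": skewness, "soft_fraction": soft_fraction}
```

In this form, each tumor is reduced to a fixed-length descriptor that can be compared between benign and malignant groups with rank-sum or Kolmogorov–Smirnov tests, as the study does.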
(This article belongs to the Special Issue AI and Medical Imaging in Breast Disease)

20 pages, 18331 KiB  
Article
Breast Cancer Detection Using Automated Segmentation and Genetic Algorithms
by María de la Luz Escobar, José I. De la Rosa, Carlos E. Galván-Tejada, Jorge I. Galvan-Tejada, Hamurabi Gamboa-Rosales, Daniel de la Rosa Gomez, Huitzilopoztli Luna-García and José M. Celaya-Padilla
Diagnostics 2022, 12(12), 3099; https://doi.org/10.3390/diagnostics12123099 - 8 Dec 2022
Viewed by 1804
Abstract
Breast cancer is one of the most common cancers among women worldwide. Early detection of breast cancer can help to reduce death rates in breast cancer patients and prevent the cancer from spreading to other parts of the body. This work proposes a new method to design a biomarker integrating Bayesian predictive models, pyRadiomics features, and genetic algorithms to classify benign and malignant lesions. The method allows two types of images to be evaluated: the radiologist-segmented lesion, and a novel automated analysis of the whole breast. The results demonstrate a difference of only 12% in effectiveness between the radiologist-generated segmentation and the automatic whole-breast analysis for calcification cases, and a 25% difference between the lesion and the whole breast for mass cases. In addition, our approach was compared against other methods proposed in the literature, providing an AUC = 0.86 for the analysis of images with calcification lesions and AUC = 0.96 for masses.
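The genetic-algorithm component of a pipeline like this selects a binary mask over candidate radiomics features. The minimal sketch below shows only the GA skeleton; the fitness function in the paper (built on pyRadiomics features and Bayesian predictive models) is not reproduced here, so `fitness` is a caller-supplied stand-in, and all hyperparameters are illustrative.

```python
import random

def genetic_feature_selection(fitness, n_features, pop_size=20, generations=30,
                              crossover_rate=0.8, mutation_rate=0.05, seed=0):
    """Toy genetic algorithm for selecting a binary feature mask.

    `fitness` maps a tuple of 0/1 genes (1 = feature kept) to a score
    to maximize, e.g., cross-validated AUC of a downstream classifier.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]  # elitism: carry the two best masks forward
        while len(next_pop) < pop_size:
            a, b = rng.sample(scored[:10], 2)  # parents drawn from the top half
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_features)  # single-point crossover
                child = a[:cut] + b[cut:]
            else:
                child = a
            # Bit-flip mutation with small per-gene probability
            child = tuple(g ^ 1 if rng.random() < mutation_rate else g for g in child)
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```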

15 pages, 4430 KiB  
Article
Tensor-Based Learning for Detecting Abnormalities on Digital Mammograms
by Ioannis N. Tzortzis, Agapi Davradou, Ioannis Rallis, Maria Kaselimi, Konstantinos Makantasis, Anastasios Doulamis and Nikolaos Doulamis
Diagnostics 2022, 12(10), 2389; https://doi.org/10.3390/diagnostics12102389 - 1 Oct 2022
Cited by 2 | Viewed by 1390
Abstract
In this study, we propose a tensor-based learning model to efficiently detect abnormalities on digital mammograms. Because the availability of medical data is limited and often restricted by GDPR (General Data Protection Regulation) compliance, the need for more sophisticated and less data-hungry approaches is urgent. Accordingly, our proposed artificial intelligence framework utilizes the canonical polyadic decomposition to decrease the trainable parameters of the wrapped Rank-R FNN model, leading to efficient learning with small amounts of data. Our model was evaluated on the open-source digital mammographic database INbreast and compared with state-of-the-art models in this domain. The experimental results show that the proposed solution performs well in comparison with other deep learning models, such as AlexNet and SqueezeNet, achieving 90% ± 4% accuracy and an F1 score of 84% ± 5%. Additionally, our framework tends to attain more robust performance with small amounts of data and is computationally lighter for inference purposes, due to its small number of trainable parameters.
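The parameter saving from a canonical polyadic (CP) decomposition, which underpins the abstract's claim of a less data-hungry model, is easy to illustrate: a full 3-way weight tensor needs the product of its dimensions, while a rank-R CP parameterization needs only the sum times R. The dimensions below are illustrative, not the paper's.

```python
import numpy as np

def cp_reconstruct(factors):
    """Reconstruct a 3-way tensor from rank-R CP factor matrices A, B, C.

    Each factor has shape (dim_k, R); the tensor is the sum over r of the
    outer products of A[:, r], B[:, r], and C[:, r].
    """
    A, B, C = factors
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# Parameter counting: a full weight tensor for, e.g., 32x32 patches with
# 8 outputs has 32*32*8 = 8192 parameters, while a rank-4 CP
# parameterization needs only (32 + 32 + 8) * 4 = 288.
dims, rank = (32, 32, 8), 4
full_params = int(np.prod(dims))
cp_params = sum(d * rank for d in dims)
```

Constraining the weights to this low-rank form is what lets a wrapped model train reliably on small datasets: fewer free parameters means less data needed to fit them.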

12 pages, 5176 KiB  
Article
Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI
by Riccardo Samperna, Nikita Moriakov, Nico Karssemeijer, Jonas Teuwen and Ritse M. Mann
Diagnostics 2022, 12(7), 1690; https://doi.org/10.3390/diagnostics12071690 - 11 Jul 2022
Viewed by 1916
Abstract
Automatic breast and fibro-glandular tissue (FGT) segmentation in breast MRI allows for the efficient and accurate calculation of breast density. The U-Net architecture, either 2D or 3D, has already been shown to be effective at addressing the segmentation problem in breast MRI. However, the lack of publicly available datasets for this task has forced several authors to rely on internal datasets composed of either acquisitions without fat suppression (WOFS) or with fat suppression (FS), limiting the generalization of the approach. To solve this problem, we propose a data-centric approach that efficiently uses the available data. By collecting a dataset of T1-weighted breast MRI acquisitions acquired with the Dixon method, we train a network on both T1 WOFS and FS acquisitions while utilizing the same ground-truth segmentation. Using the “plug-and-play” framework nnU-Net, we achieve, on our internal test set, a Dice Similarity Coefficient (DSC) of 0.96 and 0.91 for WOFS breast and FGT segmentation and 0.95 and 0.86 for FS breast and FGT segmentation, respectively. On an external, publicly available dataset, a panel of breast radiologists rated the quality of our automatic segmentation at an average of 3.73 on a four-point scale, with an average percentage agreement of 67.5%.

20 pages, 2852 KiB  
Article
Diagnostic Accuracy of Machine Learning Models on Mammography in Breast Cancer Classification: A Meta-Analysis
by Tengku Muhammad Hanis, Md Asiful Islam and Kamarul Imran Musa
Diagnostics 2022, 12(7), 1643; https://doi.org/10.3390/diagnostics12071643 - 5 Jul 2022
Cited by 6 | Viewed by 2356
Abstract
In this meta-analysis, we aimed to estimate the diagnostic accuracy of machine learning models on digital mammograms and tomosynthesis in breast cancer classification and to assess the factors affecting their diagnostic accuracy. We searched for related studies in Web of Science, Scopus, PubMed, Google Scholar, and Embase. The studies were screened in two stages to exclude unrelated studies and duplicates. Finally, 36 studies containing 68 machine learning models were included in this meta-analysis. The area under the curve (AUC), hierarchical summary receiver operating characteristic (HSROC) curve, pooled sensitivity, and pooled specificity were estimated using a bivariate Reitsma model. The overall AUC, pooled sensitivity, and pooled specificity were 0.90 (95% CI: 0.85–0.90), 0.83 (95% CI: 0.78–0.87), and 0.84 (95% CI: 0.81–0.87), respectively. The three significant covariates identified in this study were country (p = 0.003), source (p = 0.002), and classifier (p = 0.016); the type-of-data covariate was not statistically significant (p = 0.121). Additionally, Deeks’ linear regression test indicated publication bias among the included studies (p = 0.002); thus, the results should be interpreted with caution.
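To make the pooling idea concrete: the bivariate Reitsma model used in the paper jointly models sensitivity and specificity with random effects (it is typically fit with specialized packages such as R's `mada`). The sketch below is deliberately simpler — a naive fixed-effect, univariate pooling of per-study proportions on the logit scale — and is only meant to illustrate why pooling happens on the logit scale rather than on raw proportions.

```python
import math

def pool_on_logit_scale(proportions, ns):
    """Naive fixed-effect pooling of per-study sensitivities (or specificities).

    Each proportion is logit-transformed, weighted by the inverse of its
    approximate variance 1/(n*p*(1-p)), averaged, and back-transformed.
    This ignores between-study heterogeneity and the sensitivity/specificity
    correlation that the bivariate Reitsma model accounts for.
    """
    logits, weights = [], []
    for p, n in zip(proportions, ns):
        p = min(max(p, 1e-6), 1 - 1e-6)   # guard against p = 0 or 1
        var = 1.0 / (n * p * (1.0 - p))   # delta-method variance of logit(p)
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))
```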

14 pages, 3762 KiB  
Article
BI-RADS-Based Classification of Mammographic Soft Tissue Opacities Using a Deep Convolutional Neural Network
by Albin Sabani, Anna Landsmann, Patryk Hejduk, Cynthia Schmidt, Magda Marcon, Karol Borkowski, Cristina Rossi, Alexander Ciritsis and Andreas Boss
Diagnostics 2022, 12(7), 1564; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12071564 - 28 Jun 2022
Cited by 4 | Viewed by 2659
Abstract
The aim of this study was to investigate the potential of a machine learning algorithm to classify breast cancer solely by the presence of soft tissue opacities in mammograms, independent of other morphological features, using a deep convolutional neural network (dCNN). Soft tissue opacities were classified based on their radiological appearance using the ACR BI-RADS atlas. We included 1744 mammograms from 438 patients to create 7242 icons by manual labeling. The icons were sorted into three categories: “no opacities” (BI-RADS 1), “probably benign opacities” (BI-RADS 2/3), and “suspicious opacities” (BI-RADS 4/5). A dCNN was trained (70% of the data), validated (20%), and finally tested (10%). A sliding-window approach was applied to create colored probability maps for visual impression. The diagnostic performance of the dCNN was compared to human readout by experienced radiologists on a “real-world” dataset. The accuracies of the models on the test dataset ranged between 73.8% and 89.8%. Compared to human readout, our dCNN achieved a higher specificity (100%, 95% CI: 85.4–100%; reader 1: 86.2%, 95% CI: 67.4–95.5%; reader 2: 79.3%, 95% CI: 59.7–91.3%), while its sensitivity (84.0%, 95% CI: 63.9–95.5%) was lower than that of the human readers (reader 1: 88.0%, 95% CI: 67.4–95.4%; reader 2: 88.0%, 95% CI: 67.7–96.8%). In conclusion, a dCNN can be used for the automatic detection as well as the standardized and observer-independent classification of soft tissue opacities in mammograms, independent of the presence of microcalcifications. Human decision making in accordance with the BI-RADS classification can be mimicked by artificial intelligence.
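The sliding-window probability maps mentioned in the abstract can be sketched as follows. The window size, stride, and the `classify` stub (standing in for the trained dCNN) are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def probability_map(image, classify, window=32, stride=16):
    """Slide a window over a mammogram and record per-patch probabilities.

    `classify` maps a (window, window) patch to a suspicion probability
    in [0, 1]; the resulting coarse grid can be upsampled and colored
    for visual overlay on the original image.
    """
    h, w = image.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    pmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i * stride:i * stride + window,
                          j * stride:j * stride + window]
            pmap[i, j] = classify(patch)
    return pmap
```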

14 pages, 1800 KiB  
Article
Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms
by Xuxin Chen, Ke Zhang, Neman Abdoli, Patrik W. Gilley, Ximin Wang, Hong Liu, Bin Zheng and Yuchen Qiu
Diagnostics 2022, 12(7), 1549; https://doi.org/10.3390/diagnostics12071549 - 25 Jun 2022
Cited by 14 | Viewed by 3303
Abstract
Deep convolutional neural networks (CNNs) have been widely used in various medical imaging tasks. However, due to the intrinsic locality of convolution operations, CNNs generally cannot model long-range dependencies well, which are important for accurately identifying or mapping corresponding breast lesion features computed from unregistered multiple mammograms. This motivated us to leverage the architecture of multi-view vision transformers to capture long-range relationships of multiple mammograms from the same patient in one examination. For this purpose, we employed local transformer blocks to separately learn patch relationships within the four mammograms acquired from two views (CC/MLO) of the two sides (right/left). The outputs from the different views and sides were concatenated and fed into global transformer blocks to jointly learn patch relationships between the four images representing the two views of the left and right breasts. To evaluate the proposed model, we retrospectively assembled a dataset of 949 sets of mammograms, including 470 malignant cases and 479 normal or benign cases. We trained and evaluated the model using five-fold cross-validation. Without any arduous preprocessing steps (e.g., optimal window cropping, chest wall or pectoral muscle removal, two-view image registration, etc.), our four-image (two-view-two-side) transformer-based model achieves case classification performance with an area under the ROC curve (AUC = 0.818 ± 0.039) that significantly outperforms the AUC of 0.784 ± 0.016 achieved by state-of-the-art multi-view CNNs (p = 0.009). It also outperforms two one-view-two-side models that achieve AUCs of 0.724 ± 0.013 (CC view) and 0.769 ± 0.036 (MLO view), respectively. The study demonstrates the potential of using transformers to develop high-performing computer-aided diagnosis schemes that combine four mammograms.
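The local-then-global design described above can be sketched with a minimal, untrained single-head self-attention in plain numpy: attention runs first within each view's patch tokens, then over the concatenation of all four views. Random projection weights stand in for trained ones, and token/dimension sizes are illustrative — this shows only the information flow, not the paper's architecture details.

```python
import numpy as np

def self_attention(x, rng):
    """Single-head scaled dot-product self-attention (no learned biases).

    x: (tokens, dim). Random projections stand in for trained weights.
    """
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)  # row-wise softmax
    return attn @ v

def four_view_transformer(views, seed=0):
    """Local attention within each of the four mammographic views
    (RCC, LCC, RMLO, LMLO), then global attention over the concatenated
    tokens -- a minimal, untrained sketch of the local/global design."""
    rng = np.random.default_rng(seed)
    local = [self_attention(v, rng) for v in views]  # within-view mixing
    tokens = np.concatenate(local, axis=0)           # join all views
    return self_attention(tokens, rng)               # cross-view mixing
```

The key point is the second stage: because all four views' tokens attend to each other, lesion features in, say, the CC view can directly influence the representation of the MLO view without any image registration.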

Other


19 pages, 4297 KiB  
Systematic Review
A Systematic Review of Application Progress on Machine Learning-Based Natural Language Processing in Breast Cancer over the Past 5 Years
by Chengtai Li, Ying Weng, Yiming Zhang and Boding Wang
Diagnostics 2023, 13(3), 537; https://doi.org/10.3390/diagnostics13030537 - 1 Feb 2023
Cited by 2 | Viewed by 2231
Abstract
Artificial intelligence (AI) has been steadily developing in the medical field over the past few years, and AI-based applications have advanced cancer diagnosis. Breast cancer generates a massive amount of data in oncology, and there has been a high level of research enthusiasm for applying AI techniques to assist in breast cancer diagnosis and improve doctors’ efficiency. However, making effective use of the large volume of breast cancer-related medical records remains challenging. Over the past few years, AI-based NLP applications have been increasingly proposed for breast cancer. In this systematic review, we follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and investigate the past five years of literature on natural language processing (NLP)-based AI applications. This systematic review aims to uncover recent trends in this area, close the research gap, and help doctors better understand the NLP application pipeline. We first conduct an initial literature search of 202 publications from Scopus, Web of Science, PubMed, Google Scholar, and the Association for Computational Linguistics (ACL) Anthology. Then, we screen the literature based on inclusion and exclusion criteria. Next, we categorize and analyze the advantages and disadvantages of the different machine learning models. We also discuss current challenges, such as the lack of a public dataset. Furthermore, we suggest some promising future directions, including semi-supervised learning, active learning, and transfer learning.
