Medical Image Processing and Analysis Methods for Cancer Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (20 June 2022) | Viewed by 13406

Special Issue Editors


Dr. Elisa Scalco
Guest Editor
Istituto di Tecnologie Biomediche, Consiglio Nazionale delle Ricerche, Via Fratelli Cervi 93, 20090 Milan, Italy
Interests: medical image processing and analysis; radiomics; cancer imaging; magnetic resonance imaging; computed tomography

Dr. Wilfrido Gómez-Flores
Guest Editor
Centro de Investigación y de Estudios Avanzados del IPN, Unidad Tamaulipas, Parque TECNOTAM, Ciudad Victoria 87130, Mexico
Interests: digital image analysis; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

We are pleased to announce the opening of a new Special Issue of the journal Applied Sciences.
Medical image processing and analysis play an important role in cancer diagnosis, prognosis, and the assessment of treatment response. Recent improvements in image enhancement, registration, segmentation, and quantitative imaging, together with artificial intelligence and the availability of large databases, have led to more accurate classification and prediction models for computer-aided diagnosis. In addition, the combination of image-based features with other types of data (e.g., omics data), as in radiogenomics, has opened new research fields and provides novel insights into the relationship between tumor genotype and phenotype.
This Special Issue focuses on the most recent developments in biomedical image processing and analysis methods, and on their robustness evaluation, for cancer applications, considering the different imaging modalities (e.g., CT, MRI, X-ray, PET, SPECT, US) adopted for cancer detection, diagnosis, prognosis, and treatment response evaluation. Topics of interest include (but are not limited to):

  • Medical image enhancement, denoising, artifact correction, super-resolution;
  • Medical image segmentation and detection;
  • Medical image registration and contour propagation;
  • 3D organ visualization and reconstruction;
  • Quantitative imaging;
  • Radiomics, radiogenomics, and dosiomics;
  • Medical image analysis with deep learning;
  • Explainable artificial intelligence for cancer;
  • Classification and prediction models using image-based features;
  • Mechanistic models of tumor growth, radiobiological models, and dose-maps analysis.

Dr. Elisa Scalco
Dr. Wilfrido Gómez-Flores
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical imaging
  • Cancer detection
  • Classification and prediction models in cancer
  • Image segmentation
  • Image registration
  • Radiomics
  • Deep learning
  • Quantitative imaging

Published Papers (4 papers)

Research

13 pages, 962 KiB  
Article
The Impact of Segmentation Method and Target Lesion Selection on Radiomic Analysis of 18F-FDG PET Images in Diffuse Large B-Cell Lymphoma
by Francesca Botta, Mahila Ferrari, Sara Raimondi, Federica Corso, Giuliana Lo Presti, Saveria Mazzara, Lighea Simona Airò Farulla, Tommaso Radice, Anna Vanazzi, Enrico Derenzini, Laura Lavinia Travaini and Francesco Ceci
Appl. Sci. 2022, 12(19), 9678; https://doi.org/10.3390/app12199678 - 27 Sep 2022
Cited by 1 | Viewed by 1312
Abstract
Radiomic analysis of 18F-FDG PET/CT images might identify predictive imaging biomarkers; however, the reproducibility of this quantitative approach might depend on the methodology adopted for image analysis. This retrospective study investigates the impact of the PET segmentation method and of the selection of different target lesions on the radiomic analysis of baseline 18F-FDG PET/CT images in a population of newly diagnosed diffuse large B-cell lymphoma (DLBCL) patients. The whole tumor burden was segmented on PET images applying six methods: (1) a fixed standardized uptake value (SUV) threshold of 2.5; (2) a 25% maximum SUV (SUVmax) threshold; (3) a 42% SUVmax threshold; (4) a 1.3 × liver uptake threshold; (5) the intersection of methods 1, 2, and 4; and (6) the intersection of methods 1, 3, and 4. For each method, total metabolic tumor volume (TMTV) and whole-body total lesion glycolysis (WTLG) were assessed, and their association with survival outcomes (progression-free survival, PFS, and overall survival, OS) was investigated. Methods 1 and 2 provided the strongest associations and were selected for the next steps. Radiomic analysis was then performed on two target lesions for each patient: the one with the highest SUV and the largest one. Fifty-three radiomic features were extracted, and radiomic scores to predict PFS and OS were obtained. Two Cox proportional-hazards regression models for PFS and OS were developed: (1) a univariate radiomic model based on the radiomic score; and (2) a multivariable clinical–radiomic model including the radiomic score and clinical/diagnostic parameters (IPI score, SUVmax, TMTV, WTLG, lesion volume). The models were created in the four scenarios obtained by varying the segmentation method and/or the target lesion, and their performances were compared (C-index). In all scenarios, the radiomic score was significantly associated with PFS and OS both at univariate and multivariable analysis (p < 0.001), in the latter case in association with the IPI score. When comparing the models' performances in the four scenarios, the C-indexes agreed within their confidence intervals. The C-index ranges were 0.79–0.81 and 0.80–0.83 for the PFS radiomic and clinical–radiomic models, and 0.82–0.87 and 0.83–0.90 for the OS radiomic and clinical–radiomic models. In conclusion, the choice between the two PET segmentation methods and between the two target lesions for radiomic analysis did not significantly affect the performance of the prognostic models built on radiomic and clinical data of DLBCL patients. These results prompt further investigation of the proposed methodology on a validation dataset.
(This article belongs to the Special Issue Medical Image Processing and Analysis Methods for Cancer Applications)
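As a concrete illustration of the six segmentation rules described in the abstract, the following minimal NumPy sketch applies them to a SUV map. It is not the authors' pipeline; the array names and the liver uptake value are assumptions, and in the study the rules are applied to the segmented whole tumor burden rather than a raw patch.

```python
import numpy as np

def lesion_masks(suv, liver_suv):
    """Binary masks from a SUV map following the six thresholding rules
    listed in the abstract (values and names are illustrative)."""
    suv_max = suv.max()
    m1 = suv >= 2.5                   # (1) fixed SUV threshold of 2.5
    m2 = suv >= 0.25 * suv_max        # (2) 25% of SUVmax
    m3 = suv >= 0.42 * suv_max        # (3) 42% of SUVmax
    m4 = suv >= 1.3 * liver_suv       # (4) 1.3 x mean liver uptake
    m5 = m1 & m2 & m4                 # (5) intersection of rules 1, 2, 4
    m6 = m1 & m3 & m4                 # (6) intersection of rules 1, 3, 4
    return m1, m2, m3, m4, m5, m6

# Toy example on a synthetic SUV patch
rng = np.random.default_rng(0)
suv_patch = rng.gamma(shape=2.0, scale=1.5, size=(64, 64))
for i, mask in enumerate(lesion_masks(suv_patch, liver_suv=2.0), start=1):
    print(f"method {i}: {int(mask.sum())} voxels")
```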

12 pages, 4392 KiB  
Article
Discrimination of Tumor Texture Based on MRI Radiomic Features: Is There a Volume Threshold? A Phantom Study
by João Santinha, Linda Bianchini, Mário Figueiredo, Celso Matos, Alessandro Lascialfari, Nikolaos Papanikolaou, Marta Cremonesi, Barbara A. Jereczek-Fossa, Francesca Botta and Daniela Origgi
Appl. Sci. 2022, 12(11), 5465; https://doi.org/10.3390/app12115465 - 27 May 2022
Cited by 2 | Viewed by 1735
Abstract
Radiomics is emerging as a promising tool to extract quantitative biomarkers, called radiomic features, from medical images, potentially contributing to improved diagnosis and treatment of oncological patients. However, technical limitations might impair the reliability of radiomic features and their ability to quantify clinically relevant tissue properties. Among these, sampling the image signal in a region that is too small can reduce the ability to discriminate tissues with different properties. However, a volume threshold guaranteeing a reliable analysis, which might vary according to the imaging modality and clinical scenario, has not yet been assessed. In this study, an MRI phantom specifically developed for the radiomic investigation of gynecological malignancies was used to explore how the ability of radiomic features to discriminate different image textures varies with the volume of the analyzed region. The phantom, embedding inserts with different textures, was scanned on two 1.5T scanners and one 3T scanner, each using the T2-weighted sequence of the clinical protocol implemented for gynecological studies. Within each of the three inserts, six cylindrical regions were drawn with volumes ranging from 0.8 cm³ to 29.8 cm³, and 944 radiomic features were extracted from both the original images and images processed with different filters. For each scanner, the ability of each feature to discriminate the different textures was quantified. Despite differences observed among the scanner models, the overall percentage of discriminative features across scanners was >70%, with the smallest volume having the lowest percentage of discriminative features for all scanners. Stratification by feature class, still aggregating data from original and filtered images, showed a statistically significant association between the percentage of discriminative features and VOI size for the GLCM, GLDM, and GLSZM classes on the first 1.5T scanner and for the first-order and GLSZM classes on the second 1.5T scanner. Poorer results in terms of the features' discriminative ability were found for the 3T scanner. Focusing on the original images only, the analysis of discriminative features stratified by feature class showed that the first-order and GLCM classes were robust to VOI size variations (>85% discriminative features for all sizes), while for the 1.5T scanners the GLSZM and NGTDM classes showed a percentage of discriminative features >80% only for volumes no smaller than 3.3 cm³, and the GLRLM class only for volumes of 7.4 cm³ or larger. As for the 3T scanner, only the GLSZM class showed a percentage of discriminative features >80% for all volumes above 3.3 cm³. Analogous considerations were obtained for each filter, providing useful indications for feature selection in this clinical case. Similar studies should be replicated with suitably adapted phantoms to derive useful data for other clinical scenarios and imaging modalities.
(This article belongs to the Special Issue Medical Image Processing and Analysis Methods for Cancer Applications)
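The abstract quantifies, for each VOI size, the fraction of radiomic features that can discriminate the three phantom textures. Below is a hedged sketch of one plausible per-feature check, using a Kruskal-Wallis test across textures; the repetition scheme, group sizes, and significance level are assumptions and do not reproduce the authors' protocol.

```python
import numpy as np
from scipy.stats import kruskal

def is_discriminative(samples_per_texture, alpha=0.05):
    """Kruskal-Wallis test for one radiomic feature at one VOI size.
    `samples_per_texture` is a list of 1-D arrays of repeated feature values,
    one array per phantom insert (the repetition scheme is an assumption)."""
    _, p_value = kruskal(*samples_per_texture)
    return p_value < alpha

# Toy example: one feature measured 10 times in each of the three textures
rng = np.random.default_rng(1)
feature_values = [rng.normal(loc, scale=0.1, size=10) for loc in (1.0, 1.2, 1.5)]
print(is_discriminative(feature_values))  # True if the feature separates the textures
```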

13 pages, 3652 KiB  
Article
A Clustering Approach to Improve IntraVoxel Incoherent Motion Maps from DW-MRI Using Conditional Auto-Regressive Bayesian Model
by Elisa Scalco, Alfonso Mastropietro, Giovanna Rizzo and Ettore Lanzarone
Appl. Sci. 2022, 12(4), 1907; https://doi.org/10.3390/app12041907 - 11 Feb 2022
Cited by 2 | Viewed by 1183
Abstract
The Intra-Voxel Incoherent Motion (IVIM) model allows the estimation of water diffusion and perfusion-related coefficients in biological tissues from diffusion-weighted MR images. Among the available approaches to fit the IVIM bi-exponential decay, a segmented Bayesian algorithm with a Conditional Auto-Regressive (CAR) prior for spatial regularization has recently been proposed to produce more reliable coefficient estimates. However, the CAR spatial regularization can generate inaccurate estimates, especially at the interfaces between different tissues. To overcome this problem, in this work the segmented CAR model was coupled with a k-means clustering approach to separate different tissues and exclude voxels belonging to other regions from the CAR prior specification. The proposed approach was compared with the original Bayesian CAR method without clustering and with a state-of-the-art Bayesian approach without CAR. The approaches were tested and compared on simulated images by calculating the estimation error and the coefficient of variation (CV). Furthermore, the proposed method was applied to illustrative real images of oncologic patients. On simulated images, the proposed innovation reduced the average error by 47%, 21%, and 58% for D, f, and D*, respectively, compared to the state-of-the-art Bayesian approach, and by 48% and 34% for D and f, respectively, compared to the original CAR method, while achieving the same error for D*. The clustering approach also consistently reduced the CV for each coefficient. On real images, the novel approach did not alter the IVIM maps obtained by the original CAR method, while reducing their typical blotchy appearance at tissue boundaries. The proposed approach therefore represents a valuable improvement over the state-of-the-art Bayesian CAR method: it provides more reliable IVIM coefficient estimation and is less sensitive to bias and inconsistency at tissue/tissue and tissue/background interfaces.
(This article belongs to the Special Issue Medical Image Processing and Analysis Methods for Cancer Applications)
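For reference, the IVIM signal model fitted in the paper is the bi-exponential decay S(b) = S0 · [f · exp(−b · D*) + (1 − f) · exp(−b · D)]. The sketch below is a plain, non-Bayesian illustration of that model combined with a k-means grouping of voxel decays; it does not reproduce the authors' segmented CAR prior, and the b-values, initial guesses, and cluster count are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans

# IVIM bi-exponential decay: S(b) = S0 * (f*exp(-b*D*) + (1 - f)*exp(-b*D))
def ivim(b, s0, f, d, d_star):
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

b_values = np.array([0, 10, 25, 50, 100, 200, 400, 800], dtype=float)  # assumed b-values

def fit_voxels(signals, n_tissues=3):
    """signals: (n_voxels, n_b) DW-MRI decays. Voxels are first grouped with
    k-means (a crude stand-in for the tissue separation used in the paper),
    then each voxel is fitted independently by non-linear least squares."""
    labels = KMeans(n_clusters=n_tissues, n_init=10, random_state=0).fit_predict(
        signals / signals[:, :1]  # normalize each decay by its b = 0 value
    )
    params = np.zeros((signals.shape[0], 4))
    for i, s in enumerate(signals):
        p0 = (s[0], 0.1, 1e-3, 1e-2)  # initial guesses for S0, f, D, D*
        params[i], _ = curve_fit(ivim, b_values, s, p0=p0, maxfev=5000)
    return labels, params

# Synthetic example: 50 noisy voxels sharing the same true coefficients
rng = np.random.default_rng(2)
clean = ivim(b_values, 1000.0, 0.15, 1.2e-3, 2.0e-2)
noisy = clean + rng.normal(0.0, 5.0, size=(50, b_values.size))
labels, params = fit_voxels(noisy)
print(params.mean(axis=0))  # average fitted S0, f, D, D*
```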

28 pages, 3512 KiB  
Article
Breast Cancer Detection Using Mammogram Images with Improved Multi-Fractal Dimension Approach and Feature Fusion
by Dilovan Asaad Zebari, Dheyaa Ahmed Ibrahim, Diyar Qader Zeebaree, Mazin Abed Mohammed, Habibollah Haron, Nechirvan Asaad Zebari, Robertas Damaševičius and Rytis Maskeliūnas
Appl. Sci. 2021, 11(24), 12122; https://doi.org/10.3390/app112412122 - 20 Dec 2021
Cited by 50 | Viewed by 8312
Abstract
Breast cancer detection using mammogram images at an early stage is an important step in disease diagnostics. We propose a new method for the classification of benign or malignant breast cancer from mammogram images. Hybrid thresholding and a machine learning method are used to derive the region of interest (ROI). The derived ROI is then separated into five different blocks. The wavelet transform is applied to suppress noise in each block using BayesShrink soft thresholding, capturing high and low frequencies within different sub-bands. An improved fractal dimension (FD) approach, called multi-FD (M-FD), is proposed to extract multiple features from each denoised block. The number of extracted features is then reduced by a genetic algorithm. Five artificial neural network (ANN) classifiers, one per block, are trained to classify the features extracted from each block. Lastly, the results of the five blocks are fused to obtain the final decision. The proposed approach is tested and evaluated on four benchmark mammogram image datasets (MIAS, DDSM, INbreast, and BCDR). We present the results of single- and double-dataset evaluations: only one dataset is used for training and testing in the single-dataset evaluation, whereas two datasets (one for training and one for testing) are used in the double-dataset evaluation. The experimental results show that the proposed method yields better results on the INbreast dataset in the single-dataset evaluation, whilst better results are obtained on the remaining datasets in the double-dataset evaluation. The proposed approach outperforms other state-of-the-art models on the Mini-MIAS dataset.
(This article belongs to the Special Issue Medical Image Processing and Analysis Methods for Cancer Applications)
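The core primitive behind the proposed multi-FD (M-FD) descriptor is a fractal dimension estimate of an image block. The sketch below shows a standard box-counting estimator on a binarized 2-D block as a generic illustration of the principle; it is not the improved M-FD of the paper, and the threshold and box sizes are assumptions.

```python
import numpy as np

def box_counting_fd(block, threshold=None):
    """Standard box-counting fractal dimension of a square 2-D image block.
    The block is binarized with a simple mean threshold (an assumption);
    this is the generic estimator, not the paper's multi-FD (M-FD) variant."""
    if threshold is None:
        threshold = block.mean()
    binary = block > threshold
    n = binary.shape[0]
    sizes = [s for s in (2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        trimmed = binary[: n - n % s, : n - n % s]
        # count boxes of side s that contain at least one foreground pixel
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # the dimension estimate is the slope of log N(s) versus log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy example on a synthetic texture block
rng = np.random.default_rng(3)
block = rng.random((128, 128))
print(round(box_counting_fd(block), 2))  # close to 2 for a dense random field
```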
