Recent Advances in Biomedical Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (30 September 2021) | Viewed by 33441

Special Issue Editors


Dr. Gloria Bueno
Guest Editor
VISILAB, University of Castilla-La Mancha, E.T.S.I. Industriales, Avda Camilo Jose Cela s/n, 13071 Ciudad Real, Spain
Interests: digital pathology; biomedical image processing; artificial intelligence

Dr. Noelia Vallez
Guest Editor
E.T.S. Ingenieros Industriales, VISILAB Grupo de Visión y Sistemas Inteligentes, University of Castilla–La Mancha, 13071 Ciudad Real, Spain
Interests: computer vision; machine learning; pattern recognition; image analysis

Special Issue Information

Dear Colleagues,

We invite submissions exploring cutting-edge research and recent developments in the field of biomedical image processing.

Advances in biomedical imaging, including digital radiography, X-ray computed tomography (CT), nuclear imaging (positron emission tomography, PET), ultrasound, optical imaging, and magnetic resonance imaging (MRI), as well as a variety of new microscopies such as whole slide imaging (WSI) in digital pathology, have resulted in a wide range of research and clinical analysis methods. Imaging can provide unique information and quantitative descriptions of tissue composition and morphology, as well as of biological structures and functions.

This information may give new insights into the causes of different diseases and presents enormous opportunities to develop and test new, more effective treatments that may revolutionize patient care. The challenge is to process and model all the acquired data so that this information can be exploited effectively.

This Special Issue presents state-of-the-art techniques for processing and analyzing the information coming from these different biomedical image sources.

Dr. Gloria Bueno
Dr. Noelia Vallez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • CT image processing
  • MRI processing
  • Nuclear image processing
  • Ultrasound image processing
  • Mammography image processing
  • Digital pathology
  • Computational microscopy in biomedicine
  • Optical image processing
  • Multispectral image processing
  • Nano-imaging processing in biomedicine
  • Artificial intelligence in biomedical imaging

Published Papers (11 papers)


Research


16 pages, 24430 KiB  
Article
On the Reliability of CNNs in Clinical Practice: A Computer-Aided Diagnosis System Case Study
by Andrea Loddo and Lorenzo Putzu
Appl. Sci. 2022, 12(7), 3269; https://doi.org/10.3390/app12073269 - 23 Mar 2022
Cited by 7 | Viewed by 1588
Abstract
Leukocyte classification is essential to assess their number and status since they are the body’s first defence against infection and disease. Automation of the process can reduce the laborious manual process of review and diagnosis by operators and has been the subject of study for at least two decades. Most computer-aided systems exploit convolutional neural networks for classification purposes without any intermediate step to produce an accurate classification. This work explores the current limitations of deep learning-based methods applied to medical blood smear data. In particular, we consider leukocyte analysis oriented towards leukaemia prediction as a case study. Specifically, we aim to demonstrate that a single classification step can lead to incorrect predictions or, worse, to correct predictions obtained with wrong indicators provided by the images. By generating new synthetic leukocyte data, it is possible to demonstrate that the inclusion of a fine-grained method, such as detection or segmentation, before classification is essential to allow the network to correctly identify the relevant information on individual white blood cells. The effectiveness of this study is thoroughly analysed and quantified through a series of experiments on a public data set of blood smears taken under a microscope. Experimental results show that residual networks perform statistically better in this scenario, even though they make correct predictions with incorrect information.
(This article belongs to the Special Issue Recent Advances in Biomedical Image Processing)

19 pages, 10071 KiB  
Article
Health Risk Detection and Classification Model Using Multi-Model-Based Image Channel Expansion and Visual Pattern Standardization
by Chang-Min Kim, Ellen J. Hong, Kyungyong Chung and Roy C. Park
Appl. Sci. 2021, 11(18), 8621; https://doi.org/10.3390/app11188621 - 16 Sep 2021
Cited by 1 | Viewed by 1699
Abstract
Although mammography is an effective screening method for early detection of breast cancer, it is also difficult for experts to use since it requires a high level of sensitivity and expertise. A computer-aided detection system was introduced to improve the detection accuracy of breast cancer in mammography, which is difficult to read. In addition, research to find lesions in mammography images using artificial intelligence has been actively conducted recently. However, the images generally used for breast cancer diagnosis are high-resolution and thus require high-spec equipment and a significant amount of time and money to learn and recognize the images and process calculations. This can lower the accuracy of the diagnosis since it depends on the performance of the equipment. To solve this problem, this paper proposes a health risk detection and classification model using multi-model-based image channel expansion and visual pattern shaping. The proposed method expands the channels of breast ultrasound images and detects tumors quickly and accurately through the YOLO model. In order to reduce the amount of computation and enable rapid diagnosis of the detected tumors, the model reduces the dimensions of the data by normalizing the visual information and uses them as input for the RNN model to diagnose breast cancer. When the channels were expanded through the proposed brightness smoothing and visual pattern shaping, the accuracy was the highest at 94.9%. Based on the images generated, the study evaluated the breast cancer diagnosis performance. The results showed that the accuracy of the proposed model was 97.3%, CRNN 95.2%, VGG 93.6%, AlexNet 62.9%, and GoogleNet 75.3%, confirming that the proposed model had the best performance.

15 pages, 947 KiB  
Article
Detection and Model of Thermal Traces Left after Aggressive Behavior of Laboratory Rodents
by Magdalena Mazur-Milecka, Jacek Ruminski, Wojciech Glac and Natalia Glowacka
Appl. Sci. 2021, 11(14), 6644; https://doi.org/10.3390/app11146644 - 20 Jul 2021
Viewed by 1586
Abstract
Automation of complex social behavior analysis of experimental animals would allow for faster, more accurate and reliable research results in many biological, pharmacological, and medical fields. However, there are behaviors that are difficult to detect not only for the computer, but also for the human observer. Here, we present an analysis of a method for identifying aggressive behavior in thermal images by detecting traces of saliva left on the animals’ fur after a bite, nape attack, or grooming. We checked the detection capabilities using simulations of social test conditions inspired by real observations and measurements. Detection of simulated traces differing in size and temperature on a single original frame revealed the dependence of the parameters of commonly used corner detectors (R score, ranking) on the parameters of the traces. We also simulated saliva temperature changes over time and showed that the detection time does not affect the correctness of the approximation of the observed process. Furthermore, tracking the dynamics of temperature changes of these traces allows one to infer the exact moment of the aggressive action. In conclusion, the proposed algorithm together with thermal imaging provides additional data necessary to automate the analysis of social behavior in rodents.
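The R score mentioned for the corner detectors above is the classic Harris response, R = det(M) - k·trace(M)², built from the structure tensor M of image gradients. A patch-level sketch (purely illustrative; the function name, patch data and the per-patch summation are assumptions, not the study's implementation):

```python
import numpy as np

def harris_r(patch, k=0.05):
    """Harris R score of an image patch: R = det(M) - k * trace(M)^2,
    where M is the gradient structure tensor summed over the patch."""
    gy, gx = np.gradient(patch.astype(float))  # row and column derivatives
    m_xx, m_yy, m_xy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    det = m_xx * m_yy - m_xy * m_xy
    trace = m_xx + m_yy
    return det - k * trace ** 2

# A corner (two edges meeting) scores higher than a flat region.
corner = np.zeros((9, 9))
corner[4:, 4:] = 1.0
flat = np.zeros((9, 9))
print(harris_r(corner) > harris_r(flat))  # True
```

A warm saliva trace on cooler fur produces exactly this kind of locally isotropic gradient energy, which is why its size and temperature contrast feed through to the detector's score.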

14 pages, 7036 KiB  
Article
Digital Histology by Phase Imaging Specific Biomarkers for Human Tumoral Tissues Discrimination
by José Luis Ganoza-Quintana, Félix Fanjul-Vélez and José Luis Arce-Diego
Appl. Sci. 2021, 11(13), 6142; https://doi.org/10.3390/app11136142 - 01 Jul 2021
Cited by 5 | Viewed by 2241
Abstract
Histology is the gold standard for diagnosis. Conventional biopsy presents artifacts, delays, or human bias. Digital histology enables automation and improved diagnosis. It digitizes microscopic images of histological samples and analyzes similar parameters. The present approach proposes the novel use of phase contrast in clinical digital histology to improve diagnosis. The use of label-free fresh tissue slices prevents processing artifacts and reduces processing time. Five phase contrast parameters are implemented and calculated: the external scale, the fractal dimension, the anisotropy factor, the scattering coefficient, and the refractive index variance. Images of healthy and tumoral samples of liver, colon, and kidney are employed. A total of 252 images with 10×, 20×, and 40× magnifications are measured. Discrimination significance between healthy and tumoral tissues is assessed statistically with ANOVA (p-value < 0.005). The analysis is made for each tissue type and for different magnifications. It shows a dependence on tissue type and image magnification. The p-value of the most significant parameters is below 10⁻⁵. Liver and colon tissues present a great overlap in significant phase contrast parameters. The 10× fractal dimension is significant for all tissue types under analysis. These results are promising for the use of phase contrast in digital histology clinical praxis.
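Among the parameters listed, the fractal dimension is commonly estimated by box counting: count the boxes of decreasing size that intersect the structure and regress log(count) on log(1/size). The sketch below is purely illustrative (the function name and binary-mask input are assumptions; the study itself derives the parameter from phase contrast images):

```python
import numpy as np

def box_count_fractal_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a 2D binary structure by box
    counting: slope of log(box count) versus log(1/box size)."""
    counts = []
    for s in sizes:
        # Crop so the image tiles exactly into s-by-s boxes, then count
        # the boxes containing at least one foreground pixel.
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is two-dimensional.
mask = np.ones((64, 64), dtype=bool)
print(round(box_count_fractal_dimension(mask, sizes=(1, 2, 4, 8, 16)), 2))  # 2.0
```

Real tissue masks give non-integer values between 1 and 2, and the choice of box sizes and of the binarization threshold strongly affects the estimate.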

11 pages, 638 KiB  
Article
Texture-Based Analysis of 18F-Labeled Amyloid PET Brain Images
by Alexander P. Seiffert, Adolfo Gómez-Grande, Eva Milara, Sara Llamas-Velasco, Alberto Villarejo-Galende, Enrique J. Gómez and Patricia Sánchez-González
Appl. Sci. 2021, 11(5), 1991; https://doi.org/10.3390/app11051991 - 24 Feb 2021
Viewed by 1510
Abstract
Amyloid positron emission tomography (PET) brain imaging with radiotracers like [18F]florbetapir (FBP) or [18F]flutemetamol (FMM) is frequently used for the diagnosis of Alzheimer’s disease. Quantitative analysis is usually performed with standardized uptake value ratios (SUVR), which are calculated by normalizing to a reference region. However, the reference region could present high variability in longitudinal studies. Texture features based on the grey-level co-occurrence matrix, also called Haralick features (HF), are evaluated in this study to discriminate between amyloid-positive and amyloid-negative cases. A retrospective study cohort of 66 patients with amyloid PET images (30 [18F]FBP and 36 [18F]FMM) was selected, and SUVRs and 6 HFs were extracted from 13 cortical volumes of interest. Mann–Whitney U-tests were performed to analyze differences in the features between amyloid-positive and amyloid-negative cases. Receiver operating characteristic (ROC) curves were computed and their area under the curve (AUC) was calculated to study the discriminatory capability of the features. SUVR proved to be the most significant feature among all tests, with AUCs between 0.692 and 0.989. All HFs except correlation also showed good performance. AUCs of up to 0.949 were obtained with the HFs. These results suggest the potential use of texture features for the classification of amyloid PET images.
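The SUVR quantification mentioned in the abstract is, at its core, a ratio of mean uptakes between a target volume of interest and a reference region. A minimal sketch (array and mask names are invented for illustration; clinical pipelines add registration, atlas-based VOIs and partial-volume handling):

```python
import numpy as np

def suvr(pet_image, target_mask, reference_mask):
    """Standardized uptake value ratio: mean uptake in the target
    volume of interest divided by mean uptake in the reference region."""
    target_mean = pet_image[target_mask].mean()
    reference_mean = pet_image[reference_mask].mean()
    return target_mean / reference_mean

# Toy 3D volume with an elevated-uptake target and a unit reference region.
img = np.ones((4, 4, 4))
target = np.zeros_like(img, dtype=bool)
reference = np.zeros_like(img, dtype=bool)
target[0] = True
reference[1] = True
img[target] = 2.0  # elevated tracer uptake in the target VOI
print(round(suvr(img, target, reference), 2))  # 2.0
```

The study's point is precisely that the denominator (the reference region) can drift between longitudinal scans, which is what motivates reference-free texture features.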

19 pages, 3204 KiB  
Article
Evaluating the Effect of Intensity Standardisation on Longitudinal Whole Brain Atrophy Quantification in Brain Magnetic Resonance Imaging
by Emily E. Carvajal-Camelo, Jose Bernal, Arnau Oliver, Xavier Lladó, María Trujillo and The Alzheimer’s Disease Neuroimaging Initiative
Appl. Sci. 2021, 11(4), 1773; https://doi.org/10.3390/app11041773 - 17 Feb 2021
Cited by 1 | Viewed by 2823
Abstract
Atrophy quantification is fundamental for understanding brain development and diagnosing and monitoring brain diseases. FSL-SIENA is a well-known fully automated method that has been widely used in brain magnetic resonance imaging studies. However, intensity variations arising during image acquisition may compromise evaluation, analysis and even diagnosis. In this work, we studied whether intensity standardisation could improve longitudinal atrophy quantification using FSL-SIENA. We evaluated the effect of six intensity standardisation methods—z-score, fuzzy c-means, Gaussian mixture model, kernel density estimation, histogram matching and WhiteStripe—on atrophy detected by FSL-SIENA. First, we evaluated scan–rescan repeatability using scans taken during the same session from OASIS (n=122). Except for WhiteStripe, intensity standardisation did not compromise the scan–rescan repeatability of FSL-SIENA. Second, we compared the mean annual atrophy for Alzheimer’s and control subjects from OASIS (n=122) and ADNI (n=147) yielded by FSL-SIENA with and without intensity standardisation, after adjusting for covariates. Our findings were threefold: First, the use of histogram matching was counterproductive, primarily as its assumption of equal tissue proportions does not necessarily hold in longitudinal studies. Second, standardising with z-score and WhiteStripe before registration affected the registration performance, thus leading to erroneous estimates. Third, z-score was the only method that consistently led to increased effect sizes compared to when omitted (no standardisation: 0.39 and 0.43 for OASIS and ADNI; z-score: 0.45 for both datasets). Overall, we found that incorporating z-score right after registration led to reduced inter-subject inter-scan intensity variability and benefited FSL-SIENA. Our work evinces the relevance of appropriate intensity standardisation in longitudinal cerebral atrophy assessments using FSL-SIENA.
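Of the six methods compared, z-score standardisation is the simplest to state: centre and scale the intensities within a brain mask. A hedged sketch of the idea (illustrative only, with synthetic data; not the FSL-SIENA pipeline itself):

```python
import numpy as np

def zscore_standardise(volume, brain_mask):
    """Z-score intensity standardisation: subtract the mean and divide
    by the standard deviation of intensities inside the brain mask."""
    vals = volume[brain_mask]
    return (volume - vals.mean()) / vals.std()

# Synthetic scan with arbitrary scanner units; after standardisation the
# in-mask intensities have zero mean and unit variance.
rng = np.random.default_rng(0)
scan = rng.normal(loc=300.0, scale=40.0, size=(8, 8, 8))
mask = np.ones_like(scan, dtype=bool)
std = zscore_standardise(scan, mask)
print(abs(std[mask].mean()) < 1e-9, abs(std[mask].std() - 1.0) < 1e-9)  # True True
```

The paper's finding that z-score helps only when applied *after* registration reflects the fact that the transform rescales intensities globally, which can perturb intensity-driven registration if done first.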

15 pages, 8437 KiB  
Article
A Phantom Study to Investigate Robustness and Reproducibility of Grey Level Co-Occurrence Matrix (GLCM)-Based Radiomics Features for PET
by Mahbubunnabi Tamal
Appl. Sci. 2021, 11(2), 535; https://doi.org/10.3390/app11020535 - 07 Jan 2021
Cited by 6 | Viewed by 2049
Abstract
Quantification and classification of heterogeneous radiotracer uptake in Positron Emission Tomography (PET) using textural features (termed radiomics) and artificial intelligence (AI) has the potential to be used as a biomarker of diagnosis and prognosis. However, textural features have been predicted to be strongly correlated with volume, segmentation and quantization, while the impact of image contrast and noise has not been assessed systematically. Further continuous investigations are required to update the existing standardization initiatives. This study aimed to investigate the relationships between textural features and these factors with a torso NEMA phantom filled with 18F to yield different contrasts and reconstructed with different durations to represent varying levels of noise. The phantom was also scanned with heterogeneous spherical inserts fabricated with 3D printing technology. All spheres were delineated using: (1) the exact boundaries based on their known diameters; (2) a fixed 40% threshold; and (3) an adaptive threshold. Six textural features were derived from the gray level co-occurrence matrix (GLCM) using different quantization levels. The results indicate that homogeneity and dissimilarity are the most suitable for measuring PET tumor heterogeneity with a quantization level of 64, provided that the segmentation method is robust to noise and contrast variations. To use these textural features as prognostic biomarkers, changes in textural features between baseline and treatment scans should always be reported along with the changes in volumes.
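Homogeneity and dissimilarity, the two features the study recommends, are derived from the grey level co-occurrence matrix. A small self-contained sketch for a single horizontal offset on a 4-level toy patch (illustrative only; the real analysis works on 3D PET volumes, multiple offsets and up to 64 quantization levels):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset."""
    dr, dc = offset
    rows, cols = image.shape
    m = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            m[image[r, c], image[r + dr, c + dc]] += 1
    m = m + m.T                      # make the matrix symmetric
    return m / m.sum()               # normalise to joint probabilities

def homogeneity(p):
    i, j = np.indices(p.shape)
    return (p / (1.0 + (i - j) ** 2)).sum()

def dissimilarity(p):
    i, j = np.indices(p.shape)
    return (p * np.abs(i - j)).sum()

# 4-level toy patch quantised into discrete grey levels.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
p = glcm(patch, levels=4)
print(round(homogeneity(p), 3), round(dissimilarity(p), 3))  # 0.833 0.333
```

The quantization level (the `levels` argument here) is exactly the factor the study varies: too few levels wash out heterogeneity, too many make the matrix noise-sensitive.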

18 pages, 6347 KiB  
Article
Detection of Ki67 Hot-Spots of Invasive Breast Cancer Based on Convolutional Neural Networks Applied to Mutual Information of H&E and Ki67 Whole Slide Images
by Zaneta Swiderska-Chadaj, Jaime Gallego, Lucia Gonzalez-Lopez and Gloria Bueno
Appl. Sci. 2020, 10(21), 7761; https://doi.org/10.3390/app10217761 - 02 Nov 2020
Cited by 6 | Viewed by 5602
Abstract
Ki67 hot-spot detection and its evaluation in invasive breast cancer regions play a significant role in routine medical practice. The quantification of cellular proliferation assessed by Ki67 immunohistochemistry is an established prognostic and predictive biomarker that determines the choice of therapeutic protocols. In this paper, we present three deep learning-based approaches to automatically detect and quantify Ki67 hot-spot areas by means of the Ki67 labeling index. To this end, a dataset composed of 100 whole slide images (WSIs) belonging to 50 breast cancer cases (Ki67 and H&E WSI pairs) was used. Three methods based on CNN classification were proposed and compared to create the tumor proliferation map. The best results were obtained by applying the CNN to the mutual information acquired from the color deconvolution of both the Ki67 marker and the H&E WSIs. The overall accuracy of this approach was 95%. The agreement between the automatic Ki67 scoring and the manual analysis is promising, with a Spearman’s ρ correlation of 0.92. The results illustrate the suitability of this CNN-based approach for detecting hot-spot areas of invasive breast cancer in WSIs.
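The Ki67 labeling index used for quantification is simply the fraction of Ki67-positive tumour nuclei, reported as a percentage; in a pipeline like the one described, the counts would come from the CNN-generated proliferation map. A trivial sketch with invented counts (function name and numbers are illustrative, not from the paper):

```python
def ki67_labeling_index(positive_nuclei, total_nuclei):
    """Ki67 labeling index: percentage of Ki67-positive tumour nuclei."""
    if total_nuclei == 0:
        raise ValueError("no nuclei detected")
    return 100.0 * positive_nuclei / total_nuclei

# Hypothetical counts from one detected hot-spot region.
print(ki67_labeling_index(positive_nuclei=180, total_nuclei=600))  # 30.0
```

A Spearman correlation of 0.92 between such automatic indices and a pathologist's manual scores, as reported, compares the *ranking* of cases rather than the raw percentages, so systematic counting offsets do not penalise it.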

14 pages, 5645 KiB  
Article
Disentangled Autoencoder for Cross-Stain Feature Extraction in Pathology Image Analysis
by Helge Hecht, Mhd Hasan Sarhan and Vlad Popovici
Appl. Sci. 2020, 10(18), 6427; https://doi.org/10.3390/app10186427 - 15 Sep 2020
Cited by 7 | Viewed by 3809
Abstract
A novel deep autoencoder architecture is proposed for the analysis of histopathology images. Its purpose is to produce a disentangled latent representation in which the structure and colour information are confined to different subspaces so that stain-independent models may be learned. For this, we introduce two constraints on the representation which are implemented as a classifier and an adversarial discriminator. We show how they can be used for learning a latent representation across haematoxylin-eosin and a number of immune stains. Finally, we demonstrate the utility of the proposed representation in the context of matching image patches for registration applications and for learning a bag of visual words for whole slide image summarization.

Review


27 pages, 5668 KiB  
Review
Imaging Techniques for the Assessment of the Bone Osteoporosis-Induced Variations with Particular Focus on Micro-CT Potential
by Giulia Molino, Giorgia Montalbano, Carlotta Pontremoli, Sonia Fiorilli and Chiara Vitale-Brovarone
Appl. Sci. 2020, 10(24), 8939; https://doi.org/10.3390/app10248939 - 15 Dec 2020
Cited by 5 | Viewed by 4597
Abstract
For a long time, osteoporosis (OP) was exclusively associated with an overall bone mass reduction, leading to lower bone strength and a higher fracture risk. For this reason, the measurement of bone mineral density through dual X-ray absorptiometry was considered the gold standard method for its diagnosis. However, recent findings suggest that OP causes a more complex set of bone alterations, involving both its microstructure and composition. This review aims to provide an overview of the most evident osteoporosis-induced alterations of bone quality and a summary of the most common imaging techniques used for their assessment, at both the clinical and the laboratory scale. A particular focus is dedicated to micro-computed tomography (micro-CT) due to its superior image resolution, which allows the execution of more accurate morphometric analyses, better highlighting the architectural alterations of osteoporotic bone. In addition, micro-CT has the potential to perform densitometric measurements and finite element method analyses at the microscale, representing a potential tool for OP diagnosis and for fracture risk prediction. Unfortunately, technological improvements are still necessary to reduce the radiation dose and the scanning duration, parameters that currently limit the application of micro-CT in clinics for OP diagnosis, despite its revolutionary potential.

Other


16 pages, 758 KiB  
Systematic Review
MRI and CT Fusion in Stereotactic Electroencephalography: A Literature Review
by Jaime Perez, Claudia Mazo, Maria Trujillo and Alejandro Herrera
Appl. Sci. 2021, 11(12), 5524; https://doi.org/10.3390/app11125524 - 15 Jun 2021
Cited by 7 | Viewed by 3657
Abstract
Epilepsy is a common neurological disease characterized by spontaneous recurrent seizures. Resection of the epileptogenic tissue may be needed in approximately 25% of all cases due to ineffective treatment with anti-epileptic drugs. The surgical intervention depends on the correct detection of epileptogenic zones. The detection relies on invasive diagnostic techniques such as Stereotactic Electroencephalography (SEEG), which uses multi-modal fusion to aid in localizing electrodes, using pre-surgical magnetic resonance and intra-surgical computed tomography as the input images. Moreover, it is essential to know how to measure the performance of fusion methods in the presence of external objects, such as electrodes. In this paper, a literature review is presented, applying the methodology proposed by Kitchenham to determine the main techniques of multi-modal brain image fusion, the most relevant performance metrics, and the main fusion tools. The search was conducted using the databases and search engines of Scopus, IEEE, PubMed, Springer, and Google Scholar, resulting in 15 primary source articles. The literature review found that rigid registration was the most used technique when electrode localization in SEEG is required, being the proposed method in nine of the articles found. However, there is a lack of standard validation metrics, which makes performance measurement difficult when external objects are present, caused primarily by the absence of a gold-standard dataset for comparison.
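Mutual information, computed from the joint intensity histogram of the two modalities, is one of the standard performance metrics such a review encounters for multi-modal (MRI–CT) registration: it rises as the images come into alignment even when their intensity scales differ. A minimal histogram-based sketch (illustrative only, synthetic data, not taken from any reviewed article):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two aligned images, estimated from the
    joint intensity histogram; higher values indicate better alignment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(1)
mri = rng.random((32, 32))
ct_aligned = mri * 0.5 + 0.1                          # remapped but aligned
ct_shuffled = rng.permutation(mri.ravel()).reshape(32, 32)  # misaligned
print(mutual_information(mri, ct_aligned) > mutual_information(mri, ct_shuffled))  # True
```

Metal electrodes complicate exactly this measure: they add intensity values absent from the pre-surgical image, which is part of why the review finds validation so hard without a gold-standard dataset.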
