Artificial Intelligence in Biological and Biomedical Imaging

A special issue of Biomedicines (ISSN 2227-9059). This special issue belongs to the section "Biomedical Engineering and Materials".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 53781

Special Issue Editors

Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan
Interests: medical image processing and analysis; structural and functional magnetic resonance imaging; artificial intelligence

Guest Editor
Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
Interests: neuroradiology; magnetic resonance imaging; fetal magnetic resonance imaging; brain tumor imaging; cerebrovascular disease diagnosis and treatment

Special Issue Information

Dear Colleagues,

We are pleased to invite submissions to a new Special Issue on “Artificial Intelligence (AI) in Biological and Biomedical Imaging”. In recent years, AI, including machine learning and deep learning, has been widely investigated to automate time-consuming, labor-intensive work and to assist in diagnosis and prognosis. The application of AI in biological and biomedical imaging is an emerging research topic and a future trend.

Imaging plays an essential role in the fields of biology and biomedicine. It provides information about the structural and functional mechanisms of cells and the human body. Biological and biomedical imaging covers microscopy, molecular imaging, pathological imaging, optical coherence tomography, nuclear medicine, ultrasound imaging, X-ray radiography, computed tomography, magnetic resonance imaging, etc. AI has been applied in biological and biomedical imaging research to address imaging reconstruction, registration, detection, classification, lesion segmentation, diagnosis, prognosis, and so on. The purpose of this Special Issue is to provide diverse and up-to-date contributions of AI research in this field.

This Special Issue is open to basic, clinical, and multidisciplinary research on AI in the field of biological and biomedical imaging. It covers original articles on topics that include, but are not limited to, the following:

  • AI-based biological and biomedical image reconstruction;
  • AI-based biological and biomedical image registration;
  • AI-based biological and biomedical image classification;
  • AI-based biological and biomedical image detection and segmentation;
  • AI-based biological and biomedical applications;
  • AI-aided diagnosis;
  • AI-aided prognosis;
  • AI-aided decision making.

Dr. Yu-Te Wu
Dr. Wan-Yuo Guo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomedicines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • biological imaging
  • biomedical imaging
  • image analysis and processing
  • diagnosis
  • prognosis
  • decision-making

Published Papers (16 papers)


Research


16 pages, 11579 KiB  
Article
Artificial Intelligence for Cardiac Diseases Diagnosis and Prediction Using ECG Images on Embedded Systems
by Lotfi Mhamdi, Oussama Dammak, François Cottin and Imed Ben Dhaou
Biomedicines 2022, 10(8), 2013; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10082013 - 19 Aug 2022
Cited by 17 | Viewed by 3224
Abstract
The electrocardiogram (ECG) provides essential information about various human cardiac conditions. Several studies have investigated this topic in order to detect cardiac abnormalities for prevention purposes. Nowadays, there is an expansion of new smart signal processing methods, such as machine learning and its sub-branches, such as deep learning. These popular techniques help analyze and classify the ECG signal in an efficient way. Our study aims to develop algorithmic models that analyze ECG tracings to predict cardiovascular diseases. As health care and health insurance costs rise worldwide, the direct impact of this work is to save lives and improve medical care at lower expense. We conducted numerous experiments to optimize the deep-learning parameters. We found the same validation accuracy of about 0.95 for both the MobileNetV2 and VGG16 algorithms. After implementation on a Raspberry Pi, our results showed a small decrease in accuracy (0.94 and 0.90 for the MobileNetV2 and VGG16 algorithms, respectively). Therefore, the main purpose of the present research work is to improve real-time monitoring, simply and inexpensively, using smart mobile tools (mobile phones, smart watches, connected T-shirts, etc.). Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1

15 pages, 2728 KiB  
Article
Deep Learning-Based Computer-Aided Diagnosis of Rheumatoid Arthritis with Hand X-ray Images Conforming to Modified Total Sharp/van der Heijde Score
by Hao-Jan Wang, Chi-Ping Su, Chien-Chih Lai, Wun-Rong Chen, Chi Chen, Liang-Ying Ho, Woei-Chyn Chu and Chung-Yueh Lien
Biomedicines 2022, 10(6), 1355; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10061355 - 08 Jun 2022
Cited by 15 | Viewed by 3158
Abstract
Introduction: Rheumatoid arthritis (RA) is a systemic autoimmune disease; early diagnosis and treatment are crucial for its management. Currently, the modified total Sharp score (mTSS) is widely used as a scoring system for RA. The standard screening process for assessing mTSS is tedious and time-consuming; therefore, an efficient automatic mTSS localization and classification system is urgently needed for RA diagnosis. Current research mostly focuses on the classification of finger joints; because of insufficient detection ability in the carpal region, these methods cannot cover all the diagnostic needs of mTSS. Method: We propose an automatic labeling system that leverages the You Only Look Once (YOLO) model to detect the joint regions of both hands in hand X-ray images, as preprocessing for the joint space narrowing component of mTSS, together with a joint classification model that grades severity on the mTSS scale. During image preprocessing, window-level adjustment was used to simulate the clinician's reading conditions, training data for the carpal and finger bones were separated and then integrated, and the image resolution was increased or decreased to observe changes in model accuracy. Results: Integrated data proved to be beneficial. The mean average precision of the proposed model in detecting joints for joint space narrowing reached 0.92, and the precision, recall, and F1 score all reached 0.94 to 0.95. For joint classification, the average accuracy was 0.88, and the accuracies for the severe, mild, and healthy classes reached 0.91, 0.79, and 0.90, respectively. Conclusions: The proposed model is feasible and efficient, and could be helpful for subsequent research on computer-aided diagnosis in RA. We suggest that applying a one-hand X-ray imaging protocol in clinical practice could improve the accuracy of the mTSS classification model in identifying mild disease. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
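Joint detection in pipelines like the one above is typically matched to ground truth by intersection over union (IoU), the criterion underlying the reported mean average precision. The following stdlib sketch is purely illustrative (the function name and box coordinates are not from the paper):

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted joint box is usually counted as a true positive when IoU >= 0.5.
pred = (10, 10, 30, 30)
truth = (15, 15, 35, 35)
print(round(box_iou(pred, truth), 3))  # 0.391
```

Precision, recall, and mean average precision then follow from counting matches above such a threshold across all detections.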

12 pages, 1803 KiB  
Article
Widen the Applicability of a Convolutional Neural-Network-Assisted Glaucoma Detection Algorithm of Limited Training Images across Different Datasets
by Yu-Chieh Ko, Wei-Shiang Chen, Hung-Hsun Chen, Tsui-Kang Hsu, Ying-Chi Chen, Catherine Jui-Ling Liu and Henry Horng-Shing Lu
Biomedicines 2022, 10(6), 1314; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10061314 - 03 Jun 2022
Cited by 5 | Viewed by 1999
Abstract
Automated glaucoma detection using deep learning may increase the diagnostic rate of glaucoma to prevent blindness, but generalizable models are currently unavailable despite the use of huge training datasets. This study aims to evaluate the performance of a convolutional neural network (CNN) classifier trained with a limited number of high-quality fundus images in detecting glaucoma and methods to improve its performance across different datasets. A CNN classifier was constructed using EfficientNet B3 and 944 images collected from one medical center (core model) and externally validated using three datasets. The performance of the core model was compared with (1) the integrated model constructed by using all training images from the four datasets and (2) the dataset-specific model built by fine-tuning the core model with training images from the external datasets. The diagnostic accuracy of the core model was 95.62% but dropped to ranges of 52.5–80.0% on the external datasets. Dataset-specific models exhibited superior diagnostic performance on the external datasets compared to other models, with a diagnostic accuracy of 87.5–92.5%. The findings suggest that dataset-specific tuning of the core CNN classifier effectively improves its applicability across different datasets when increasing training images fails to achieve generalization. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1

12 pages, 2668 KiB  
Article
Automated Endocardial Border Detection and Left Ventricular Functional Assessment in Echocardiography Using Deep Learning
by Shunzaburo Ono, Masaaki Komatsu, Akira Sakai, Hideki Arima, Mie Ochida, Rina Aoyama, Suguru Yasutomi, Ken Asada, Syuzo Kaneko, Tetsuo Sasano and Ryuji Hamamoto
Biomedicines 2022, 10(5), 1082; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10051082 - 06 May 2022
Cited by 8 | Viewed by 2664
Abstract
Endocardial border detection is a key step in assessing left ventricular systolic function in echocardiography. However, this process is still not sufficiently accurate, and manual retracing is often required, which is time-consuming and introduces intra-/inter-observer variability in clinical practice. To address these clinical issues, more accurate and standardized automatic endocardial border detection would be valuable. Here, we develop a deep learning-based method for automated endocardial border detection and left ventricular functional assessment in two-dimensional echocardiographic videos. First, segmentation of the left ventricular cavity was performed in the six representative projections over a cardiac cycle. We employed four segmentation methods: U-Net, UNet++, UNet3+, and Deep Residual U-Net. UNet++ and UNet3+ showed sufficiently high performance in the mean intersection over union and Dice coefficient. The accuracy of the four segmentation methods was then evaluated by calculating the mean estimation error of the echocardiographic indexes. UNet++ was superior to the other segmentation methods, with acceptable mean estimation errors of 10.8% for left ventricular ejection fraction, 8.5% for global longitudinal strain, and 5.8% for global circumferential strain. Our method using UNet++ demonstrated the best performance. This method may potentially support examiners and improve the workflow in echocardiography. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
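The segmentation comparison above is scored by intersection over union and the Dice coefficient. As a minimal stdlib illustration (the binary masks below are made up, not taken from the study), both metrics can be computed from flattened 0/1 masks:

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for two binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why papers often report both.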

11 pages, 4246 KiB  
Article
Temporal and Locational Values of Images Affecting the Deep Learning of Cancer Stem Cell Morphology
by Yumi Hanai, Hiroaki Ishihata, Zaijun Zhang, Ryuto Maruyama, Tomonari Kasai, Hiroyuki Kameda and Tomoyasu Sugiyama
Biomedicines 2022, 10(5), 941; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10050941 - 19 Apr 2022
Cited by 3 | Viewed by 2156
Abstract
Deep learning is being increasingly applied to digital microscopy image data of cells, and well-defined annotated cell images have contributed to the development of the technology. Cell morphology is an inherent characteristic of each cell type; moreover, the morphology of a cell changes during its lifetime because of cellular activity. Artificial intelligence (AI) capable of recognizing miPS-LLCcm cells, cancer stem cells (CSCs) derived from mouse-induced pluripotent stem (miPS) cells cultured in a medium containing Lewis lung cancer (LLC) cell culture-conditioned medium (cm), would be valuable for both basic and applied science. This study aims to clarify the limitations of AI models constructed using different datasets and to improve the versatility of AI models. The trained AI was used to segment CSCs in phase-contrast images using conditional generative adversarial networks (CGAN). Including blank cell images in the training dataset did not affect the quality of CSC prediction in phase-contrast images compared with a dataset without them. AI models trained on images of 1-day cultures could predict CSCs in images of 2-day cultures, although with reduced prediction quality. Convolutional neural network (CNN) classification indicated that miPS-LLCcm cell images were being classified according to cultivation day. The prediction of CSCs remains to be improved by using a dataset that includes images from each culture day. This is useful because the cells retain their stem cell characteristics, as indicated by stem cell marker expression, even as their morphology changes during culture. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1

12 pages, 1382 KiB  
Article
Radiomics-Based Predictive Model of Radiation-Induced Liver Disease in Hepatocellular Carcinoma Patients Receiving Stereo-Tactic Body Radiotherapy
by Po-Chien Shen, Wen-Yen Huang, Yang-Hong Dai, Cheng-Hsiang Lo, Jen-Fu Yang, Yu-Fu Su, Ying-Fu Wang, Chia-Feng Lu and Chun-Shu Lin
Biomedicines 2022, 10(3), 597; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10030597 - 03 Mar 2022
Cited by 3 | Viewed by 2279
Abstract
(1) Background: The application of stereotactic body radiation therapy (SBRT) in hepatocellular carcinoma (HCC) limits the risk of radiation-induced liver disease (RILD), and we aimed to predict the occurrence of RILD more accurately. (2) Methods: 86 HCC patients were enrolled. We identified key predictive factors from clinical, radiomic, and dose-volumetric parameters using multivariate analysis, sequential forward selection (SFS), and a K-nearest neighbor (KNN) algorithm. We developed a predictive model for RILD based on these factors, using the random forest or logistic regression algorithms. (3) Results: Five key predictive factors were identified in the training set: the albumin–bilirubin grade, difference average, strength, V5, and V30. After model training, the F1 score, sensitivity, specificity, and accuracy of the final random forest model were 0.857, 100%, 93.3%, and 94.4% in the test set, respectively, while the logistic regression model yielded an F1 score, sensitivity, specificity, and accuracy of 0.8, 66.7%, 100%, and 94.4%, respectively. (4) Conclusions: Based on clinical, radiomic, and dose-volumetric factors, our models achieved satisfactory performance in predicting the occurrence of SBRT-related RILD in HCC patients. The proposed models may identify patients at high risk of RILD before SBRT, allowing treatment strategies to be adjusted accordingly. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
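The metrics reported above all derive from a single test-set confusion matrix. As a hedged stdlib sketch (the counts below are hypothetical, chosen only to be numerically consistent with the random forest figures reported in the abstract, not taken from the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, precision, F1 score, and accuracy
    from the four confusion-matrix counts of a binary classifier."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, prec, f1, acc

# Hypothetical counts: 3 RILD cases all caught, 1 false alarm, 14 true negatives.
sens, spec, prec, f1, acc = classification_metrics(tp=3, fp=1, tn=14, fn=0)
print(round(f1, 3), round(sens * 100, 1), round(spec * 100, 1), round(acc * 100, 1))
# 0.857 100.0 93.3 94.4
```

With strongly imbalanced medical cohorts like this one, F1 and sensitivity are more informative than raw accuracy, which is dominated by the negative class.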

12 pages, 1892 KiB  
Article
Effects of Artificial Intelligence-Derived Body Composition on Kidney Graft and Patient Survival in the Eurotransplant Senior Program
by Nick Lasse Beetz, Dominik Geisel, Seyd Shnayien, Timo Alexander Auer, Brigitta Globke, Robert Öllinger, Tobias Daniel Trippel, Thomas Schachtner and Uli Fehrenbach
Biomedicines 2022, 10(3), 554; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10030554 - 26 Feb 2022
Cited by 3 | Viewed by 1967
Abstract
The Eurotransplant Senior Program allocates kidneys to elderly transplant patients. The aim of this retrospective study is to investigate the use of computed tomography (CT) body composition using artificial intelligence (AI)-based tissue segmentation to predict patient and kidney transplant survival. Body composition at the third lumbar vertebra level was analyzed in 42 kidney transplant recipients. Cox regression analysis of 1-year, 3-year and 5-year patient survival, 1-year, 3-year and 5-year censored kidney transplant survival, and 1-year, 3-year and 5-year uncensored kidney transplant survival was performed. First, the body mass index (BMI), psoas muscle index (PMI), skeletal muscle index (SMI), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) served as independent variables. Second, the cut-off values for sarcopenia and obesity served as independent variables. The 1-year uncensored and censored kidney transplant survival was influenced by reduced PMI (p = 0.02 and p = 0.03, respectively) and reduced SMI (p = 0.01 and p = 0.03, respectively); 3-year uncensored kidney transplant survival was influenced by increased VAT (p = 0.04); and 3-year censored kidney transplant survival was influenced by reduced SMI (p = 0.05). Additionally, sarcopenia influenced 1-year uncensored kidney transplant survival (p = 0.05), whereas obesity influenced 3-year and 5-year uncensored kidney transplant survival. In summary, AI-based body composition analysis may aid in predicting short- and long-term kidney transplant survival. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
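The body-composition covariates above are height-normalized ratios: BMI is weight over height squared, and muscle indices such as SMI or PMI divide the segmented muscle area at the L3 level by height squared. A minimal sketch with made-up patient values (not from the study's cohort):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def muscle_index(muscle_area_cm2, height_m):
    """Height-normalized muscle index (e.g. SMI or PMI) in cm^2/m^2,
    computed from a segmented cross-sectional muscle area at the L3 level."""
    return muscle_area_cm2 / height_m ** 2

# Hypothetical recipient: 80 kg, 1.75 m tall, 140 cm^2 skeletal muscle at L3.
print(round(bmi(80, 1.75), 1), round(muscle_index(140, 1.75), 1))  # 26.1 45.7
```

The AI segmentation supplies the area term; the indices themselves are simple ratios that then enter the Cox regression as covariates.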

21 pages, 2735 KiB  
Article
Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening
by Akira Sakai, Masaaki Komatsu, Reina Komatsu, Ryu Matsuoka, Suguru Yasutomi, Ai Dozen, Kanto Shozu, Tatsuya Arakaki, Hidenori Machino, Ken Asada, Syuzo Kaneko, Akihiko Sekizawa and Ryuji Hamamoto
Biomedicines 2022, 10(3), 551; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10030551 - 25 Feb 2022
Cited by 17 | Viewed by 10832
Abstract
Diagnostic support tools based on artificial intelligence (AI) have exhibited high performance in various medical fields. However, their clinical application remains challenging because of the lack of explanatory power in AI decisions (black box problem), making it difficult to build trust with medical professionals. Nevertheless, visualizing the internal representation of deep neural networks will increase explanatory power and improve the confidence of medical professionals in AI decisions. We propose a novel deep learning-based explainable representation “graph chart diagram” to support fetal cardiac ultrasound screening, which has low detection rates of congenital heart diseases due to the difficulty in mastering the technique. Screening performance improves using this representation from 0.966 to 0.975 for experts, 0.829 to 0.890 for fellows, and 0.616 to 0.748 for residents in the arithmetic mean of area under the curve of a receiver operating characteristic curve. This is the first demonstration wherein examiners used deep learning-based explainable representation to improve the performance of fetal cardiac ultrasound screening, highlighting the potential of explainable AI to augment examiner capabilities. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
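The screening performance figures above are areas under the ROC curve. AUC can be computed directly from scores via the Mann-Whitney U statistic; the sketch below is a stdlib illustration with invented screening scores (not the study's data):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case outscores a randomly chosen negative case."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical screening scores for abnormal (positive) and normal cases.
print(roc_auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.5, 0.3, 0.1]))  # 0.875
```

This O(n*m) form is fine for small reader studies like the one described; rank-based implementations scale better for large score sets.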

20 pages, 1043 KiB  
Article
Hyperspectral Imaging during Normothermic Machine Perfusion—A Functional Classification of Ex Vivo Kidneys Based on Convolutional Neural Networks
by Florian Sommer, Bingrui Sun, Julian Fischer, Miriam Goldammer, Christine Thiele, Hagen Malberg and Wenke Markgraf
Biomedicines 2022, 10(2), 397; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10020397 - 07 Feb 2022
Cited by 6 | Viewed by 1713
Abstract
Facing an ongoing organ shortage in transplant medicine, strategies to increase the use of organs from marginal donors by objective organ assessment are being fostered. In this context, normothermic machine perfusion provides a platform for ex vivo organ evaluation during preservation. Consequently, analytical tools are emerging to determine organ quality. In this study, hyperspectral imaging (HSI) in the wavelength range of 550–995 nm was applied. Classification of 26 kidneys based on HSI was established using KidneyResNet, a convolutional neural network (CNN) based on the ResNet-18 architecture, to predict inulin clearance behavior. HSI preprocessing steps were implemented, including automated region of interest (ROI) selection, before executing the KidneyResNet algorithm. Training parameters and augmentation methods were investigated concerning their influence on the prediction. When classifying individual ROIs, the optimized KidneyResNet model achieved 84% and 62% accuracy in the validation and test set, respectively. With a majority decision on all ROIs of a kidney, the accuracy increased to 96% (validation set) and 100% (test set). These results demonstrate the feasibility of HSI in combination with KidneyResNet for non-invasive prediction of ex vivo kidney function. This knowledge of preoperative renal quality may support the organ acceptance decision. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
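The jump from 84%/62% ROI-level accuracy to 96%/100% kidney-level accuracy above comes from aggregating per-ROI predictions with a majority decision. A minimal sketch (the function name and class labels are illustrative, not from the paper):

```python
from collections import Counter

def kidney_class(roi_predictions):
    """Kidney-level label as the majority vote over per-ROI CNN predictions."""
    return Counter(roi_predictions).most_common(1)[0][0]

# Hypothetical per-ROI outputs of the classifier for one kidney:
rois = ["good", "good", "poor", "good", "good"]
print(kidney_class(rois))  # good
```

Majority voting suppresses isolated per-ROI errors, which is why the organ-level decision is more reliable than any single ROI classification.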

19 pages, 3372 KiB  
Article
Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks
by Bilal Ahmad, Jun Sun, Qi You, Vasile Palade and Zhongjie Mao
Biomedicines 2022, 10(2), 223; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10020223 - 21 Jan 2022
Cited by 32 | Viewed by 5467
Abstract
Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help speed up the diagnosis process and reduce the burden on health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and instant diagnosis of various cancers. However, a large amount of data (images) is needed to train deep learning models well, and large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to address this limitation. We combine two generative models in the proposed framework: variational autoencoders (VAEs) and generative adversarial networks (GANs). We swap the encoder–decoder network after initially training it on the available MR training images. The output of this swapped network is a noise vector that carries information about the image manifold, and the cascaded generative adversarial network samples its input from this informative noise vector instead of from random Gaussian noise. The proposed method helps the GAN avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images could mitigate the limitation of small medical datasets to a reasonable extent and help deep learning models perform acceptably. We used ResNet50 as a classifier, and the artificially generated brain tumor images were used to augment the real, available images during classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models; our proposed methodology achieved noticeably better results. Using brain tumor images generated artificially by our proposed method, the average classification accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved recall, specificity, precision, and F1-score values of 0.769, 0.837, 0.833, and 0.80, respectively. The proposed generative framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results show that it could be a useful clinical tool for medical experts. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1

19 pages, 3301 KiB  
Article
Predictive Factors for Neutralizing Antibody Levels Nine Months after Full Vaccination with BNT162b2: Results of a Machine Learning Analysis
by Dimitris Papadopoulos, Ioannis Ntanasis-Stathopoulos, Maria Gavriatopoulou, Zoi Evangelakou, Panagiotis Malandrakis, Maria S. Manola, Despoina D. Gianniou, Efstathios Kastritis, Ioannis P. Trougakos, Meletios A. Dimopoulos, Vangelis Karalis and Evangelos Terpos
Biomedicines 2022, 10(2), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10020204 - 18 Jan 2022
Cited by 5 | Viewed by 2013
Abstract
Vaccination against SARS-CoV-2 with BNT162b2 mRNA vaccine plays a critical role in COVID-19 prevention. Although BNT162b2 is highly effective against COVID-19, a time-dependent decrease in neutralizing antibodies (NAbs) is observed. The aim of this study was to identify the individual features that may predict NAbs levels after vaccination. Machine learning techniques were applied to data from 302 subjects. Principal component analysis (PCA), factor analysis of mixed data (FAMD), k-means clustering, and random forest were used. PCA and FAMD showed that younger subjects had higher levels of neutralizing antibodies than older subjects. The effect of age is strongest near the vaccination date and appears to decrease with time. Obesity was associated with lower antibody response. Gender had no effect on NAbs at nine months, but there was a modest association at earlier time points. Participants with autoimmune disease had lower inhibitory levels than participants without autoimmune disease. K-Means clustering showed the natural grouping of subjects into five categories in which the characteristics of some individuals predominated. Random forest allowed the characteristics to be ordered by importance. Older age, higher body mass index, and the presence of autoimmune diseases had negative effects on the development of NAbs against SARS-CoV-2, nine months after full vaccination. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)
Show Figures

Figure 1
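K-means, one of the techniques applied above, alternates between assigning each sample to its nearest centroid and moving each centroid to its cluster mean. A deterministic one-dimensional sketch (fixed initial centroids instead of random ones; the antibody values are invented, not the study's data):

```python
def kmeans_1d(values, centroids, iterations=10):
    """Plain k-means on scalars with fixed initial centroids, so the result
    is deterministic. A sketch of the algorithm, not a production routine."""
    for _ in range(iterations):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical NAb inhibition values forming two natural groups.
centroids, clusters = kmeans_1d([10, 12, 14, 40, 42, 44], [11, 43])
print(centroids)  # [12.0, 42.0]
```

Real analyses like the one described run on multidimensional feature vectors (age, BMI, comorbidities) with several random restarts; the alternating structure is the same.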

14 pages, 23697 KiB  
Article
Automated Cerebral Infarct Detection on Computed Tomography Images Based on Deep Learning
by Syu-Jyun Peng, Yu-Wei Chen, Jing-Yu Yang, Kuo-Wei Wang and Jang-Zern Tsai
Biomedicines 2022, 10(1), 122; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10010122 - 06 Jan 2022
Cited by 7 | Viewed by 2123
Abstract
The limited accuracy of cerebral infarct detection on CT images, caused by the low contrast of CT, hinders the use of CT as a first-line diagnostic modality for the screening of cerebral infarct. This research aimed to utilize a convolutional neural network to enhance the accuracy of automated cerebral infarct detection on CT images. The CT images underwent a series of preprocessing steps, mainly to enhance the contrast inside the parenchyma, adjust the orientation, spatially normalize the images to the CT template, and create a t-score map for each patient. The input format of the convolutional neural network was the t-score matrix of a 16 × 16-pixel patch. Non-infarcted and infarcted patches were selected from the t-score maps, and data augmentation was conducted to generate more patches for training and testing the proposed convolutional neural network. The convolutional neural network attained a 93.9% patch-wise detection accuracy in the test set. The proposed method offers prompt and accurate cerebral infarct detection on CT images, supporting frontline detection of ischemic stroke in both emergency and routine settings.
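The patch-preparation step described above, a voxel-wise t-score map against template statistics tiled into 16 × 16 patches for the CNN, can be sketched as follows. The template, image, and lesion here are synthetic, and the real pipeline also performs contrast enhancement, reorientation, and spatial normalization beforehand:

```python
# Build a t-score map against a normal template (per-pixel mean/std) and
# cut it into non-overlapping 16 x 16 patches, the CNN input format
# described in the abstract. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
H = W = 128
template_mean = rng.normal(50, 2, (H, W))   # hypothetical template mean
template_std = np.full((H, W), 4.0)         # hypothetical template std
image = template_mean + rng.normal(0, 4, (H, W))
image[40:60, 40:60] -= 20                   # hypothetical hypodense infarct

# t-score map: how far each pixel deviates from the template.
t_map = (image - template_mean) / template_std

# Tile the map into non-overlapping 16 x 16 patches.
P = 16
patches = t_map.reshape(H // P, P, W // P, P).swapaxes(1, 2).reshape(-1, P, P)
print(patches.shape)  # (64, 16, 16)
```

Patches overlapping the simulated lesion carry strongly negative t-scores, which is the signal the classifier learns to detect.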

20 pages, 4569 KiB  
Article
Detection of Cytopathic Effects Induced by Influenza, Parainfluenza, and Enterovirus Using Deep Convolution Neural Network
by Jen-Jee Chen, Po-Han Lin, Yi-Ying Lin, Kun-Yi Pu, Chu-Feng Wang, Shang-Yi Lin and Tzung-Shi Chen
Biomedicines 2022, 10(1), 70; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10010070 - 30 Dec 2021
Cited by 1 | Viewed by 2651
Abstract
The isolation of a virus using cell culture to observe its cytopathic effects (CPEs) is the main method for identifying viruses in clinical specimens. However, the observation of CPEs requires experienced inspectors and considerable time to inspect the changes in cell morphology. In this study, we utilized artificial intelligence (AI) to improve the efficiency of virus identification. After comparing several architectures, we used ResNet-50 as a backbone with single- and multi-task learning models to perform deep learning on the CPEs induced by influenza, enterovirus, and parainfluenza. The accuracies of the single- and multi-task learning models were 97.78% and 98.25%, respectively. In addition, the multi-task learning model increased the accuracy of the single model from 95.79% to 97.13% when only limited data on the CPEs induced by parainfluenza were available. We further modified both models by inserting a multiplexer and a de-multiplexer layer, respectively, to improve accuracy for known cell lines. In conclusion, we provide a deep learning structure built on ResNet-50 and a multi-task learning model, and show excellent performance in identifying virus-induced CPEs.
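The multi-task arrangement can be illustrated conceptually: a shared feature extractor (standing in here for the ResNet-50 backbone) feeds separate task heads, and the per-task cross-entropy losses are summed. The weights, labels, and the auxiliary task below are random placeholders, not a trained model:

```python
# Conceptual sketch of multi-task learning: one shared feature vector per
# image feeds two softmax heads, e.g. virus identification and a
# hypothetical auxiliary cell-line head. All weights are random.
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared backbone output: a 64-d feature vector per image (placeholder
# for ResNet-50's pooled features).
features = rng.normal(size=(8, 64))

# Task heads: 3 virus classes (influenza, enterovirus, parainfluenza)
# and an assumed auxiliary head over 2 cell lines.
W_virus = rng.normal(size=(64, 3))
W_cell = rng.normal(size=(64, 2))

p_virus = softmax(features @ W_virus)
p_cell = softmax(features @ W_cell)

# Multi-task loss: sum of the per-task cross-entropies (labels synthetic).
y_virus = rng.integers(0, 3, 8)
y_cell = rng.integers(0, 2, 8)
loss = (-np.log(p_virus[np.arange(8), y_virus]).mean()
        - np.log(p_cell[np.arange(8), y_cell]).mean())
print(round(loss, 3))
```

Because both heads share the backbone, gradients from the data-rich tasks regularize the shared features, which is why the abstract reports a gain on the data-poor parainfluenza task.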

11 pages, 3218 KiB  
Article
Displacement of Gray Matter and Incidence of Seizures in Patients with Cerebral Cavernous Malformations
by Chi-Jen Chou, Cheng-Chia Lee, Ching-Jen Chen, Huai-Che Yang and Syu-Jyun Peng
Biomedicines 2021, 9(12), 1872; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines9121872 - 10 Dec 2021
Cited by 3 | Viewed by 1970
Abstract
Seizures are the most common presentation in patients with cerebral cavernous malformations (CCMs). Based on the hypothesis that the volume or proportion of gray matter (GM) displaced by CCMs is associated with the risk of seizure, we developed an algorithm to quantify the volume and proportion of displaced GM and the resulting risk of seizure. Image analysis was conducted on 111 patients with solitary CCMs (divided into seizure and nonseizure groups) from our gamma knife radiosurgery (GKRS) database, collected between February 2005 and March 2020. The algorithm proved effective in quantifying the GM and CCM on T1WI MRI images. In the seizure group, 11 of the 12 patients exhibited seizures at the initial presentation, and all CCMs in the seizure group were supratentorial. CCM location within the limbic lobe was significantly associated with the risk of seizure (OR = 19.6, p = 0.02). The risk of seizure increased when the proportion of GM displaced by the CCM exceeded 31%, and was also strongly correlated with the volume of displaced GM. The volume and proportion of displaced GM were both positively correlated with the risk of seizure presentation and development and thus could be used to guide seizure prophylaxis in CCM patients.
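The quantification described above can be sketched with binary masks: intersect a gray-matter mask with the CCM mask, convert the overlap to volume, and apply the 31% threshold reported in the abstract. The masks, voxel size, and the exact definition of "proportion" (taken here as the fraction of the lesion occupying GM) are illustrative assumptions, not the paper's implementation:

```python
# Toy mask-based quantification of GM displaced by a lesion.
# Masks and voxel dimensions are synthetic.
import numpy as np

gm = np.zeros((64, 64, 64), dtype=bool)
gm[10:50, 10:50, 10:50] = True           # hypothetical gray-matter region
ccm = np.zeros_like(gm)
ccm[38:54, 38:54, 38:54] = True          # hypothetical lesion, partly in GM

voxel_ml = 0.001                         # assume 1 mm^3 voxels -> 0.001 mL
displaced = gm & ccm                     # GM voxels occupied by the lesion
displaced_ml = displaced.sum() * voxel_ml
proportion = displaced.sum() / ccm.sum() # fraction of lesion within GM

high_risk = proportion > 0.31            # threshold from the abstract
print(round(displaced_ml, 3), round(proportion, 3), high_risk)
```

With real data, the GM mask would come from tissue segmentation of the T1-weighted images rather than a hand-drawn box.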

14 pages, 2178 KiB  
Article
Using Deep Convolutional Neural Networks for Enhanced Ultrasonographic Image Diagnosis of Differentiated Thyroid Cancer
by Wai-Kin Chan, Jui-Hung Sun, Miaw-Jene Liou, Yan-Rong Li, Wei-Yu Chou, Feng-Hsuan Liu, Szu-Tah Chen and Syu-Jyun Peng
Biomedicines 2021, 9(12), 1771; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines9121771 - 26 Nov 2021
Cited by 11 | Viewed by 2807
Abstract
Differentiated thyroid cancer (DTC), arising from follicular epithelial cells, is the most common form of thyroid cancer. Beyond the common papillary thyroid carcinoma (PTC), there are a number of rare but difficult-to-diagnose pathological classifications, such as follicular thyroid carcinoma (FTC). We employed deep convolutional neural networks (CNNs) to facilitate the clinical diagnosis of differentiated thyroid cancers. An image dataset of thyroid ultrasound images from 421 DTC and 391 benign patients was collected. Three CNNs (InceptionV3, ResNet101, and VGG19) were retrained via transfer learning and tested for the classification of malignant and benign thyroid tumors. The enrolled cases were classified as PTC, FTC, follicular variant of PTC (FVPTC), Hürthle cell carcinoma (HCC), or benign. The accuracy of the CNNs was as follows: InceptionV3 (76.5%), ResNet101 (77.6%), and VGG19 (76.1%). The sensitivity was as follows: InceptionV3 (83.7%), ResNet101 (72.5%), and VGG19 (66.2%). The specificity was as follows: InceptionV3 (83.7%), ResNet101 (81.4%), and VGG19 (76.9%). The area under the curve was as follows: InceptionV3 (0.82), ResNet101 (0.83), and VGG19 (0.83). A comparison between the performance of physicians and that of the CNNs showed significantly better outcomes for the latter. Our results demonstrate that retrained deep CNNs can enhance diagnostic accuracy in most DTCs, including follicular cancers.
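The accuracy, sensitivity, and specificity figures above all derive from a binary malignant-versus-benign confusion matrix. A minimal sketch with made-up counts (not the study's results):

```python
# Derive accuracy, sensitivity, and specificity from a confusion matrix.
# The counts below are hypothetical, chosen only to match the cohort
# sizes (421 malignant, 391 benign); they are not the paper's results.
tp, fn = 340, 81    # malignant cases: correctly / incorrectly classified
fp, tn = 64, 327    # benign cases: incorrectly / correctly classified

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on malignant cases
specificity = tn / (tn + fp)   # recall on benign cases

print(round(accuracy, 3), round(sensitivity, 3), round(specificity, 3))
```

Sensitivity and specificity depend on the decision threshold applied to the network's softmax output, which is why the three CNNs can share similar AUCs while differing noticeably on these point metrics.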

Other


20 pages, 42238 KiB  
Systematic Review
Reporting on the Value of Artificial Intelligence in Predicting the Optimal Embryo for Transfer: A Systematic Review including Data Synthesis
by Konstantinos Sfakianoudis, Evangelos Maziotis, Sokratis Grigoriadis, Agni Pantou, Georgia Kokkini, Anna Trypidi, Polina Giannelou, Athanasios Zikopoulos, Irene Angeli, Terpsithea Vaxevanoglou, Konstantinos Pantos and Mara Simopoulou
Biomedicines 2022, 10(3), 697; https://0-doi-org.brum.beds.ac.uk/10.3390/biomedicines10030697 - 17 Mar 2022
Cited by 11 | Viewed by 2508
Abstract
Artificial intelligence (AI) has been gaining support in the field of in vitro fertilization (IVF). Despite the promising existing data, AI cannot yet claim gold-standard status, which serves as the rationale for this study. This systematic review and data synthesis aims to evaluate and report on the predictive capabilities of AI-based prediction models regarding IVF outcome. The study has been registered in PROSPERO (CRD42021242097). Following a systematic search of the literature in Pubmed/Medline, Embase, and the Cochrane Central Library, 18 studies were identified as eligible for inclusion. Regarding live birth, the area under the curve (AUC) of the summary receiver operating characteristic (SROC) was 0.905, while the partial AUC (pAUC) was 0.755. The observed-to-expected (O:E) ratio was 1.12 (95% CI: 0.26–2.37; 95% PI: 0.02–6.54). Regarding clinical pregnancy with fetal heartbeat, the AUC of the SROC was 0.722, while the pAUC was 0.774. The O:E ratio was 0.77 (95% CI: 0.54–1.05; 95% PI: 0.21–1.62). According to this data synthesis, the majority of the AI-based prediction models are successful in accurately predicting the IVF outcome regarding live birth, clinical pregnancy, clinical pregnancy with fetal heartbeat, and ploidy status. This review also attempted to compare AI and human prediction capabilities; although the studies do not allow for a meta-analysis, this systematic review indicates that the AI-based prediction models perform rather similarly to the embryologists' evaluations. While AI models appear marginally more effective, they still have some way to go before they can claim to significantly surpass the clinical embryologists' predictive competence.
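Two of the summary measures used above can be sketched directly: the O:E ratio for calibration, and an SROC-style AUC approximated by trapezoidal integration over pooled (1 − specificity, sensitivity) points. All numbers below are illustrative, not the review's data:

```python
# O:E ratio (calibration) and trapezoidal AUC (discrimination) from
# illustrative pooled study data.
import numpy as np

# Calibration: observed event count vs. model-expected count.
observed, expected = 112, 100
oe_ratio = observed / expected          # 1.0 would be perfect calibration
print(round(oe_ratio, 2))               # 1.12

# Discrimination: hypothetical ROC points (FPR, TPR) sorted by FPR,
# with the (0, 0) and (1, 1) endpoints included.
fpr = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
tpr = np.array([0.0, 0.6, 0.8, 0.95, 1.0])

# Trapezoid rule: sum of segment width times mean height.
auc = float(((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2).sum())
print(round(auc, 3))
```

A proper SROC curve is fitted from a bivariate meta-analysis model rather than by joining raw study points, so this is a simplification of the review's method.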
