Artificial Intelligence in Gastrointestinal Disease: Diagnosis and Management

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 41273

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Prof. Dr. Eun-Sun Kim
Guest Editor
Department of Internal Medicine, Korea University College of Medicine, Seoul 02841, Republic of Korea
Interests: gastrointestinal disease; gastric cancer; colon cancer; inflammatory bowel disease

Prof. Dr. Kwang-Sig Lee
Guest Editor
AI Center, Korea University Anam Hospital, Seoul 02841, Republic of Korea
Interests: artificial intelligence; machine learning; deep learning; health informatics

Special Issue Information

Dear Colleagues,

Roughly four in ten adults worldwide suffer from gastrointestinal disorders of varying severity. Early diagnosis may be the best way to prevent and treat serious gastrointestinal diseases. Recently, attempts to use artificial intelligence (AI) for the early diagnosis of such diseases have increased rapidly, and the reported potential is very promising. We therefore intend to publish a Special Issue on the use of AI technology, including various deep learning techniques, in the diagnosis, treatment, and management of gastrointestinal diseases such as GI tract cancer and inflammatory bowel disease, as well as in endoscopy.

Prof. Dr. Eun-Sun Kim
Prof. Dr. Kwang-Sig Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (21 papers)


Research

Jump to: Review, Other

16 pages, 1697 KiB  
Article
Polypoid Lesion Segmentation Using YOLO-V8 Network in Wireless Video Capsule Endoscopy Images
by Ali Sahafi, Anastasios Koulaouzidis and Mehrshad Lalinia
Diagnostics 2024, 14(5), 474; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics14050474 - 22 Feb 2024
Viewed by 1128
Abstract
Gastrointestinal (GI) tract disorders are a significant public health issue. They are becoming more common and can cause serious health problems and high healthcare costs. Small bowel tumours (SBTs) and colorectal cancer (CRC) are both becoming more prevalent, especially among younger adults. Early detection and removal of polyps (precursors of malignancy) is essential for prevention. Wireless Capsule Endoscopy (WCE) is a procedure that utilises swallowable camera devices that capture images of the GI tract. Because WCE generates a large number of images, automated polyp segmentation is crucial. This paper reviews computer-aided approaches to polyp detection using WCE imagery and evaluates them using a dataset of labelled anomalies and findings. The study focuses on YOLO-V8, an improved deep learning model, for polyp segmentation and finds that it performs better than existing methods, achieving high precision and recall. The present study underscores the potential of automated detection systems in improving GI polyp identification. Full article
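
For readers who want to experiment with the approach described above, the sketch below shows how a YOLOv8 segmentation model can be fine-tuned and run with the publicly documented Ultralytics Python API. It is a minimal illustration, not the authors' code; the weight file, dataset YAML, and confidence threshold are placeholder assumptions.

```python
# Minimal sketch of fine-tuning a YOLOv8 segmentation model on capsule-endoscopy
# frames via the public Ultralytics API. The dataset YAML and weight file names
# are placeholders, not the authors' actual configuration.
from ultralytics import YOLO

# Start from COCO-pretrained segmentation weights.
model = YOLO("yolov8n-seg.pt")

# "wce_polyps.yaml" is a hypothetical dataset config listing train/val image folders
# and the single "polyp" class in the Ultralytics format.
model.train(data="wce_polyps.yaml", epochs=100, imgsz=640, batch=16)

# Run inference on a new frame; each result carries boxes and segmentation masks.
results = model.predict("frame_0001.jpg", conf=0.25)
for r in results:
    if r.masks is not None:
        print("polyp pixels per instance:", [int(m.sum()) for m in r.masks.data])
```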

16 pages, 10481 KiB  
Article
Gastro-BaseNet: A Specialized Pre-Trained Model for Enhanced Gastroscopic Data Classification and Diagnosis of Gastric Cancer and Ulcer
by Gi Pyo Lee, Young Jae Kim, Dong Kyun Park, Yoon Jae Kim, Su Kyeong Han and Kwang Gi Kim
Diagnostics 2024, 14(1), 75; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics14010075 - 28 Dec 2023
Viewed by 876
Abstract
Most gastric disease prediction models have been developed using models pre-trained on natural-image data, such as ImageNet, which lack knowledge of medical domains. This study proposes Gastro-BaseNet, a classification model trained using gastroscopic image data for abnormal gastric lesions. To prove performance, we compared transfer learning based on two pre-trained models (Gastro-BaseNet and ImageNet) and two training methods (freeze and fine-tune modes). Effectiveness was verified in terms of classification at the image level and patient level, as well as the localization performance for lesions. The development of Gastro-BaseNet demonstrated superior transfer learning performance compared to random weight settings in ImageNet. When developing a model for predicting the diagnosis of gastric cancer and gastric ulcers, the transfer-learned model based on Gastro-BaseNet outperformed that based on ImageNet. Furthermore, the model’s performance was highest when fine-tuning the entire network in the fine-tune mode. Additionally, the model trained from Gastro-BaseNet showed higher localization performance, confirming its accurate detection and classification of lesions in specific locations. This study represents a notable advancement in the development of image analysis models within the medical field, resulting in improved diagnostic predictive accuracy and aiding more informed clinical decisions in gastrointestinal endoscopy. Full article
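
The freeze and fine-tune transfer-learning modes compared in this abstract can be illustrated with a short PyTorch sketch. Since Gastro-BaseNet weights are not public, a torchvision ResNet-50 is used as a stand-in backbone, and the class count and learning rate are assumptions.

```python
# Sketch of the two transfer-learning modes compared in the paper (freeze vs. fine-tune),
# using a torchvision ResNet-50 as a stand-in backbone; Gastro-BaseNet itself is not public.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int, mode: str = "fine-tune") -> nn.Module:
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if mode == "freeze":
        # Freeze every pretrained layer; only the new head is trained.
        for p in backbone.parameters():
            p.requires_grad = False
    # Replace the ImageNet head with a task-specific head (e.g., cancer / ulcer / normal).
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = build_classifier(num_classes=3, mode="fine-tune")
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```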

10 pages, 1076 KiB  
Communication
Evaluating the Utility of a Large Language Model in Answering Common Patients’ Gastrointestinal Health-Related Questions: Are We There Yet?
by Adi Lahat, Eyal Shachar, Benjamin Avidan, Benjamin Glicksberg and Eyal Klang
Diagnostics 2023, 13(11), 1950; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13111950 - 02 Jun 2023
Cited by 20 | Viewed by 2277
Abstract
Background and aims: Patients frequently have concerns about their disease and find it challenging to obtain accurate information. OpenAI’s ChatGPT chatbot (ChatGPT) is a new large language model developed to provide answers to a wide range of questions in various fields. Our aim was to evaluate the performance of ChatGPT in answering patients’ questions regarding gastrointestinal health. Methods: To evaluate the performance of ChatGPT in answering patients’ questions, we used a representative sample of 110 real-life questions. The answers provided by ChatGPT were rated in consensus by three experienced gastroenterologists. The accuracy, clarity, and efficacy of the answers provided by ChatGPT were assessed. Results: ChatGPT was able to provide accurate and clear answers to patients’ questions in some cases, but not in others. For questions about treatments, the average accuracy, clarity, and efficacy scores (1 to 5) were 3.9 ± 0.8, 3.9 ± 0.9, and 3.3 ± 0.9, respectively. For questions about symptoms, the average accuracy, clarity, and efficacy scores were 3.4 ± 0.8, 3.7 ± 0.7, and 3.2 ± 0.7, respectively. For questions about diagnostic tests, the average accuracy, clarity, and efficacy scores were 3.7 ± 1.7, 3.7 ± 1.8, and 3.5 ± 1.7, respectively. Conclusions: While ChatGPT has potential as a source of information, further development is needed. The quality of information is contingent upon the quality of the online information provided. These findings may be useful for healthcare providers and patients alike in understanding the capabilities and limitations of ChatGPT. Full article

11 pages, 1045 KiB  
Article
Comparison of Machine Learning Models and the Fatty Liver Index in Predicting Lean Fatty Liver
by Pei-Yuan Su, Yang-Yuan Chen, Chun-Yu Lin, Wei-Wen Su, Siou-Ping Huang and Hsu-Heng Yen
Diagnostics 2023, 13(8), 1407; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13081407 - 13 Apr 2023
Cited by 1 | Viewed by 1444
Abstract
The reported prevalence of non-alcoholic fatty liver disease in studies of lean individuals ranges from 7.6% to 19.3%. The aim of the study was to develop machine-learning models for the prediction of fatty liver disease in lean individuals. The present retrospective study included 12,191 lean subjects with a body mass index < 23 kg/m2 who had undergone a health checkup from January 2009 to January 2019. Participants were divided into a training (70%, 8533 subjects) and a testing group (30%, 3568 subjects). A total of 27 clinical features were analyzed, except for medical history and history of alcohol or tobacco consumption. Among the 12,191 lean individuals included in the present study, 741 (6.1%) had fatty liver. The machine learning model comprising a two-class neural network using 10 features had the highest area under the receiver operating characteristic curve (AUROC) value (0.885) among all other algorithms. When applied to the testing group, we found the two-class neural network exhibited a slightly higher AUROC value for predicting fatty liver (0.868, 0.841–0.894) compared to the fatty liver index (FLI; 0.852, 0.824–0.81). In conclusion, the two-class neural network had greater predictive value for fatty liver than the FLI in lean individuals. Full article
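
A minimal scikit-learn sketch of the comparison described above, a two-class neural network versus the fatty liver index (FLI), both scored by AUROC, is shown below. The CSV file, column names, and network size are hypothetical, and the FLI coefficients are those commonly cited from Bedogni et al. (2006), so they should be verified against the original publication.

```python
# Illustrative comparison of a two-class neural network and the fatty liver index (FLI)
# by AUROC, in the spirit of the study; "lean_checkup.csv" and its column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

def fatty_liver_index(tg, bmi, ggt, waist):
    """FLI as commonly cited (Bedogni et al., 2006); verify coefficients before any real use."""
    y = 0.953 * np.log(tg) + 0.139 * bmi + 0.718 * np.log(ggt) + 0.053 * waist - 15.745
    return 100 * np.exp(y) / (1 + np.exp(y))

df = pd.read_csv("lean_checkup.csv")          # hypothetical table of health-checkup features
X = df.drop(columns=["fatty_liver"])
y = df["fatty_liver"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

nn = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
nn.fit(X_tr, y_tr)

auc_nn = roc_auc_score(y_te, nn.predict_proba(X_te)[:, 1])
auc_fli = roc_auc_score(y_te, fatty_liver_index(X_te["triglycerides"], X_te["bmi"],
                                                X_te["ggt"], X_te["waist"]))
print(f"AUROC neural network: {auc_nn:.3f} | AUROC FLI: {auc_fli:.3f}")
```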

15 pages, 14755 KiB  
Article
Negative Samples for Improving Object Detection—A Case Study in AI-Assisted Colonoscopy for Polyp Detection
by Alba Nogueira-Rodríguez, Daniel Glez-Peña, Miguel Reboiro-Jato and Hugo López-Fernández
Diagnostics 2023, 13(5), 966; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13050966 - 03 Mar 2023
Cited by 1 | Viewed by 1966
Abstract
Deep learning object-detection models are being successfully applied to develop computer-aided diagnosis systems for aiding polyp detection during colonoscopies. Here, we evidence the need to include negative samples for both (i) reducing false positives during the polyp-finding phase, by including images with artifacts that may confuse the detection models (e.g., medical instruments, water jets, feces, blood, excessive proximity of the camera to the colon wall, blurred images, etc.) that are usually not included in model development datasets, and (ii) correctly estimating a more realistic performance of the models. By retraining our previously developed YOLOv3-based detection model with a dataset that includes 15% of additional not-polyp images with a variety of artifacts, we were able to generally improve its F1 performance in our internal test datasets (from an average F1 of 0.869 to 0.893), which now include such type of images, as well as in four public datasets that include not-polyp images (from an average F1 of 0.695 to 0.722). Full article
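
The effect described in point (ii), namely that adding not-polyp images changes the measured performance, can be seen in a toy F1 calculation: false positives triggered on negative frames lower precision even when recall is untouched. All counts below are invented for illustration.

```python
# Toy illustration of how including not-polyp (negative) frames changes the measured F1:
# false positives on negative frames lower precision even if recall is unchanged.
# The counts below are made up for illustration only.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Evaluation on polyp-only images.
tp, fp, fn = 180, 20, 25
print(f"F1 without negative frames: {f1_score(tp, fp, fn):.3f}")

# Same detector, but the test set now also contains negative frames with artifacts
# (instruments, water jets, blur, ...) that trigger extra false positives.
extra_fp_on_negatives = 30
print(f"F1 with negative frames:    {f1_score(tp, fp + extra_fp_on_negatives, fn):.3f}")
```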

21 pages, 3096 KiB  
Article
A Multiscale Polyp Detection Approach for GI Tract Images Based on Improved DenseNet and Single-Shot Multibox Detector
by Meryem Souaidi, Samira Lafraxo, Zakaria Kerkaou, Mohamed El Ansari and Lahcen Koutti
Diagnostics 2023, 13(4), 733; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13040733 - 15 Feb 2023
Cited by 10 | Viewed by 1570
Abstract
Small bowel polyps exhibit variations in color, shape, morphology, texture, and size, as well as in the presence of artifacts, irregular polyp borders, and the low-illumination conditions inside the gastrointestinal (GI) tract. Recently, researchers have developed many highly accurate polyp detection models based on one-stage or two-stage object detector algorithms for wireless capsule endoscopy (WCE) and colonoscopy images. However, their implementation requires high computational power and memory resources, thus sacrificing speed for an improvement in precision. Although the single-shot multibox detector (SSD) has proven its effectiveness in many medical imaging applications, its weak detection ability for small polyp regions persists due to the lack of complementary information between the features of low- and high-level layers. Our aim is to consecutively reuse feature maps between layers of the original SSD network. In this paper, we propose an innovative SSD model based on a redesigned version of a dense convolutional network (DenseNet) that emphasizes the interdependence of multiscale pyramidal feature maps, called DC-SSDNet (densely connected single-shot multibox detector). The original VGG-16 backbone network of the SSD is replaced with a modified version of DenseNet. The DenseNet-46 front stem is improved to extract highly typical characteristics and contextual information, which improves the model’s feature extraction ability. The DC-SSDNet architecture compresses unnecessary convolution layers of each dense block to reduce the CNN model’s complexity. Experimental results showed a remarkable improvement of the proposed DC-SSDNet in detecting small polyp regions, achieving an mAP of 93.96% and an F1-score of 90.7% while requiring less computational time. Full article

0 pages, 6458 KiB  
Article
GAR-Net: Guided Attention Residual Network for Polyp Segmentation from Colonoscopy Video Frames
by Joel Raymann and Ratnavel Rajalakshmi
Diagnostics 2023, 13(1), 123; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13010123 - 30 Dec 2022
Cited by 1 | Viewed by 1502 | Correction
Abstract
Colorectal cancer is one of the most common cancers found in human beings, and polyps are the precursors of this cancer. An accurate computer-aided polyp detection and segmentation system can help endoscopists detect abnormal tissues and polyps during colonoscopy examination, thereby reducing the chance of polyps growing into cancer. Many of the existing techniques fail to delineate the polyps accurately and produce a noisy/broken output map if the shape and size of the polyp are irregular or small. We propose an end-to-end pixel-wise polyp segmentation model named Guided Attention Residual Network (GAR-Net) that combines the power of both residual blocks and attention mechanisms to obtain a refined, continuous segmentation map. An enhanced residual block is proposed that suppresses noise and captures low-level feature maps, thereby facilitating information flow for more accurate semantic segmentation. We propose a special learning technique with a novel attention mechanism called Guided Attention Learning that can capture refined attention maps in both earlier and deeper layers regardless of the size and shape of the polyp. To study the effectiveness of the proposed GAR-Net, various experiments were carried out on two benchmark collections, viz., the CVC-ClinicDB (CVC-612) and Kvasir-SEG datasets. The experimental evaluations show that GAR-Net outperforms previously proposed models such as FCN8, SegNet, U-Net, U-Net with Gated Attention, ResUNet, and DeepLabv3. Our proposed model achieves a Dice coefficient of 91% and a mean Intersection over Union (mIoU) of 83.12% on the benchmark CVC-ClinicDB (CVC-612) dataset, and a Dice coefficient of 89.15% and an mIoU of 81.58% on the Kvasir-SEG dataset. The proposed GAR-Net model provides a robust solution for polyp segmentation from colonoscopy video frames. Full article
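
The two overlap metrics reported here (Dice coefficient and mIoU) can be computed from binary masks in a few lines of NumPy. The sketch below is illustrative only; it assumes masks thresholded at 0.5 and treats mIoU as the mean of per-frame IoU values.

```python
# Sketch of the two overlap metrics reported for GAR-Net, computed on binary masks
# with NumPy; thresholding at 0.5 is an assumption about the model output.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# mIoU over a set of frames is taken here as the mean of the per-frame IoU values.
pred_masks = [np.random.rand(288, 384) > 0.5 for _ in range(4)]   # stand-in predictions
gt_masks = [np.random.rand(288, 384) > 0.5 for _ in range(4)]     # stand-in ground truth
miou = float(np.mean([iou(p, g) for p, g in zip(pred_masks, gt_masks)]))
print(f"mIoU: {miou:.3f}")
```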

21 pages, 6697 KiB  
Article
Diagnosis of Inflammatory Bowel Disease and Colorectal Cancer through Multi-View Stacked Generalization Applied on Gut Microbiome Data
by Sultan Imangaliyev, Jörg Schlötterer, Folker Meyer and Christin Seifert
Diagnostics 2022, 12(10), 2514; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102514 - 17 Oct 2022
Cited by 1 | Viewed by 1768
Abstract
Most microbiome studies suggest that using ensemble models such as Random Forest results in the best predictive power. In this study, we empirically evaluate a more powerful ensemble learning algorithm, multi-view stacked generalization, on pediatric inflammatory bowel disease and adult colorectal cancer patient cohorts. We aim to check whether stacking leads to better results compared to using the single best machine learning algorithm. Stacking achieves the best test set Average Precision (AP) on the inflammatory bowel disease dataset, reaching AP = 0.69 and outperforming both the best base classifier (AP = 0.61) and the baseline meta learner built on top of the base classifiers (AP = 0.63). On the colorectal cancer dataset, the stacked classifier also outperforms (AP = 0.81) both the best base classifier (AP = 0.79) and the baseline meta learner (AP = 0.75). Stacking achieves the best predictive performance on the test set, outperforming the best classifiers on both patient cohorts. Application of stacking solves the issue of choosing the most appropriate machine learning algorithm by automating the model selection procedure. Clinical application of such a model is not limited to the diagnosis task; it can also be extended to biomarker selection thanks to the feature selection procedure. Full article
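
A minimal stacked-generalization sketch with scikit-learn is shown below: heterogeneous base learners feed a logistic-regression meta-learner via out-of-fold predictions, scored by average precision as in the study. It illustrates plain stacking rather than the authors' multi-view variant, and the data are synthetic stand-ins for microbiome feature tables.

```python
# Minimal stacked-generalization sketch with scikit-learn; the base learners, meta-learner,
# and synthetic data are illustrative choices, not the authors' configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

X, y = make_classification(n_samples=400, n_features=60, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,                          # out-of-fold predictions feed the meta-learner
    stack_method="predict_proba",
)
stack.fit(X_tr, y_tr)
ap = average_precision_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"Test-set average precision (AP): {ap:.2f}")
```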

10 pages, 5829 KiB  
Article
Deep Learning for Automatic Differentiation of Mucinous versus Non-Mucinous Pancreatic Cystic Lesions: A Pilot Study
by Filipe Vilas-Boas, Tiago Ribeiro, João Afonso, Hélder Cardoso, Susana Lopes, Pedro Moutinho-Ribeiro, João Ferreira, Miguel Mascarenhas-Saraiva and Guilherme Macedo
Diagnostics 2022, 12(9), 2041; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12092041 - 24 Aug 2022
Cited by 9 | Viewed by 1602
Abstract
Endoscopic ultrasound (EUS) morphology can aid in the discrimination between mucinous and non-mucinous pancreatic cystic lesions (PCLs) but has several limitations that can be overcome by artificial intelligence. We developed a convolutional neural network (CNN) algorithm for the automatic diagnosis of mucinous PCLs. Images retrieved from videos of EUS examinations for PCL characterization were used for the development, training, and validation of a CNN for mucinous cyst diagnosis. The performance of the CNN was measured by calculating the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive and negative predictive values. A total of 5505 images from 28 pancreatic cysts were used (3725 from mucinous lesions and 1780 from non-mucinous cysts). The model had an overall accuracy of 98.5%, a sensitivity of 98.3%, a specificity of 98.9%, and an AUC of 1. The image processing speed of the CNN was 7.2 ms per frame. We developed a deep learning algorithm that differentiated mucinous and non-mucinous cysts with high accuracy. The present CNN may constitute an important tool to help risk-stratify PCLs. Full article
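
The frame-level metrics reported here (sensitivity, specificity, PPV, NPV, and accuracy) all derive from a single binary confusion matrix; a small scikit-learn sketch with synthetic labels is shown below for reference.

```python
# Sketch of the frame-level metrics reported for the mucinous-cyst CNN, derived from a
# binary confusion matrix; labels and predictions here are synthetic stand-ins.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                            # 1 = mucinous, 0 = non-mucinous
y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)    # mostly correct predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens {sensitivity:.3f} | spec {specificity:.3f} | "
      f"PPV {ppv:.3f} | NPV {npv:.3f} | acc {accuracy:.3f}")
```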

23 pages, 7799 KiB  
Article
Multi-Scale Hybrid Network for Polyp Detection in Wireless Capsule Endoscopy and Colonoscopy Images
by Meryem Souaidi and Mohamed El Ansari
Diagnostics 2022, 12(8), 2030; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12082030 - 22 Aug 2022
Cited by 13 | Viewed by 2062
Abstract
The trade-off between speed and precision is a key consideration in the detection of small polyps in wireless capsule endoscopy (WCE) images. In this paper, we propose a hybrid network, an inception v4 architecture-based single-shot multibox detector (Hyb-SSDNet), to detect small polyp regions in both WCE and colonoscopy frames. Medical privacy concerns are considered the main barriers to WCE image acquisition. To satisfy the object detection requirements, we enlarged the training datasets and investigated deep transfer learning techniques. The Hyb-SSDNet framework adopts inception blocks to alleviate the inherent limitations of the convolution operation in incorporating contextual features and semantic information into deep networks. Its main components are (a) multi-scale encoding of small polyp regions, (b) use of the inception v4 backbone to enhance contextual features in the shallow and middle layers, and (c) concatenation of weighted mid-level feature maps, giving them more weight so that semantic information is better extracted. The fused feature map is then delivered to the next layer, followed by downsampling blocks that generate new pyramidal layers. Finally, the feature maps are fed to multibox detectors, consistent with the VGG16-based SSD process. The Hyb-SSDNet achieved a 93.29% mean average precision (mAP) and a testing speed of 44.5 FPS on the WCE dataset. This work proves that deep learning has the potential to drive future research in polyp detection and classification tasks. Full article

11 pages, 3065 KiB  
Article
Performance of a Deep Learning System for Automatic Diagnosis of Protruding Lesions in Colon Capsule Endoscopy
by Miguel Mascarenhas, João Afonso, Tiago Ribeiro, Hélder Cardoso, Patrícia Andrade, João P. S. Ferreira, Miguel Mascarenhas Saraiva and Guilherme Macedo
Diagnostics 2022, 12(6), 1445; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12061445 - 12 Jun 2022
Cited by 8 | Viewed by 1932
Abstract
Background: Colon capsule endoscopy (CCE) is an alternative for patients unwilling or with contraindications for conventional colonoscopy. Colorectal cancer screening may benefit greatly from widespread acceptance of a non-invasive tool such as CCE. However, reviewing CCE exams is a time-consuming process, with risk of overlooking important lesions. We aimed to develop an artificial intelligence (AI) algorithm using a convolutional neural network (CNN) architecture for automatic detection of colonic protruding lesions in CCE images. An anonymized database of CCE images collected from a total of 124 patients was used. This database included images of patients with colonic protruding lesions or patients with normal colonic mucosa or with other pathologic findings. A total of 5715 images were extracted for CNN development. Two image datasets were created and used for training and validation of the CNN. The AUROC for detection of protruding lesions was 0.99. The sensitivity, specificity, PPV and NPV were 90.0%, 99.1%, 98.6% and 93.2%, respectively. The overall accuracy of the network was 95.3%. The developed deep learning algorithm accurately detected protruding lesions in CCE images. The introduction of AI technology to CCE may increase its diagnostic accuracy and acceptance for screening of colorectal neoplasia. Full article

17 pages, 6193 KiB  
Article
Computer-Aided Image Enhanced Endoscopy Automated System to Boost Polyp and Adenoma Detection Accuracy
by Chia-Pei Tang, Chen-Hung Hsieh and Tu-Liang Lin
Diagnostics 2022, 12(4), 968; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12040968 - 12 Apr 2022
Cited by 1 | Viewed by 1918
Abstract
Colonoscopy is the gold standard for detecting colon polyps at an early stage. Early detection, characterization, and resection of polyps decrease colon cancer incidence. The colon polyp miss rate remains high despite the development of novel methods. Narrow-band imaging (NBI) is one of the image-enhancement techniques used to boost polyp detection and characterization; it uses special filters to enhance the contrast of the mucosal surface and the vascular pattern of the polyp. However, the single-button-activated system is not convenient for full-time colonoscopy operation. We selected three methods to simulate the NBI system: Color Transfer with Mean Shift (CTMS), Multi-scale Retinex with Color Restoration (MSRCR), and Gamma and Sigmoid Conversions (GSC). The results show that the classification accuracy using the original images is the lowest. All color transfer methods outperform the original-image approach. Our results verified that color transfer has a positive impact on the polyp identification and classification task. Combined analysis of the mAP and accuracy results shows the excellent performance of the MSRCR method. Full article
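
Of the three color-transfer methods, the Gamma and Sigmoid Conversions (GSC) idea is simple enough to sketch directly: a gamma curve adjusts brightness and a logistic (sigmoid) tone curve stretches mid-tone contrast. The parameter values below are illustrative assumptions, not the settings used in the study.

```python
# Sketch of the "Gamma and Sigmoid Conversions" (GSC) idea on a normalized RGB frame:
# a gamma curve adjusts brightness and a sigmoid curve stretches mid-tone contrast,
# roughly mimicking the contrast boost of NBI. Parameter values are illustrative only.
import numpy as np

def gamma_correction(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """img is float in [0, 1]; gamma < 1 brightens, gamma > 1 darkens."""
    return np.clip(img, 0.0, 1.0) ** gamma

def sigmoid_contrast(img: np.ndarray, gain: float = 10.0, cutoff: float = 0.5) -> np.ndarray:
    """Logistic tone curve centred at `cutoff`; larger `gain` means stronger contrast."""
    return 1.0 / (1.0 + np.exp(gain * (cutoff - img)))

frame = np.random.rand(480, 640, 3)      # stand-in colonoscopy frame, values in [0, 1]
enhanced = sigmoid_contrast(gamma_correction(frame, gamma=0.8), gain=10.0, cutoff=0.5)
print(enhanced.min(), enhanced.max())
```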

17 pages, 8258 KiB  
Article
Performance of Convolutional Neural Networks for Polyp Localization on Public Colonoscopy Image Datasets
by Alba Nogueira-Rodríguez, Miguel Reboiro-Jato, Daniel Glez-Peña and Hugo López-Fernández
Diagnostics 2022, 12(4), 898; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12040898 - 04 Apr 2022
Cited by 13 | Viewed by 2951
Abstract
Colorectal cancer is one of the most frequent malignancies. Colonoscopy is the de facto standard for the detection of precancerous lesions in the colon, i.e., polyps, during screening studies or after facultative recommendation. In recent years, artificial intelligence, and especially deep learning techniques such as convolutional neural networks, have been applied to polyp detection and localization in order to develop real-time CADe systems. However, the performance of machine learning models is very sensitive to changes in the nature of the testing instances, especially when trying to reproduce results on datasets totally different from those used for model development, i.e., inter-dataset testing. Here, we report the results of testing our previously published polyp detection model on ten public colonoscopy image datasets and analyze them in the context of the results of 20 other state-of-the-art publications using the same datasets. The F1-score of our recently published model was 0.88 when evaluated on a private test partition, i.e., intra-dataset testing, but it decayed, on average, by 13.65% when tested on the ten public datasets. In the published research, the average intra-dataset F1-score is 0.91, and we observed that it also decays in the inter-dataset setting, to an average F1-score of 0.83. Full article

14 pages, 4565 KiB  
Article
Use of U-Net Convolutional Neural Networks for Automated Segmentation of Fecal Material for Objective Evaluation of Bowel Preparation Quality in Colonoscopy
by Yen-Po Wang, Ying-Chun Jheng, Kuang-Yi Sung, Hung-En Lin, I-Fang Hsin, Ping-Hsien Chen, Yuan-Chia Chu, David Lu, Yuan-Jen Wang, Ming-Chih Hou, Fa-Yauh Lee and Ching-Liang Lu
Diagnostics 2022, 12(3), 613; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12030613 - 01 Mar 2022
Cited by 5 | Viewed by 2825
Abstract
Background: Adequate bowel cleansing is important for colonoscopy performance evaluation. Current bowel cleansing evaluation scales are subjective, with wide variation in consistency among physicians and low reported rates of accuracy. We aim to use machine learning to develop a fully automatic segmentation method for the objective evaluation of the adequacy of colon preparation. Methods: Colonoscopy videos were retrieved from a video data cohort and converted into qualified images, which were randomly divided into training, validation, and verification datasets. The fecal residue was manually segmented. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. The performance of the automatic segmentation was evaluated on the area of overlap with the manual segmentation. Results: A total of 10,118 qualified images from 119 videos were obtained. The model took an average of 0.3634 s to segment one image automatically. The model’s output overlapped strongly with the manual segmentation, with 94.7% ± 0.67% of the manually segmented area predicted by our AI model, which correlated well with the area measured manually (r = 0.915, p < 0.001). The AI system can be applied in real time, both qualitatively and quantitatively. Conclusions: We established a fully automatic segmentation method to rapidly and accurately mark the fecal residue-coated mucosa for the objective evaluation of colon preparation. Full article

11 pages, 12645 KiB  
Article
Accuracy and Efficiency of Right-Lobe Graft Weight Estimation Using Deep-Learning-Assisted CT Volumetry for Living-Donor Liver Transplantation
by Rohee Park, Seungsoo Lee, Yusub Sung, Jeeseok Yoon, Heung-Il Suk, Hyoungjung Kim and Sanghyun Choi
Diagnostics 2022, 12(3), 590; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12030590 - 25 Feb 2022
Cited by 6 | Viewed by 1747
Abstract
CT volumetry (CTV) has been widely used for pre-operative graft weight (GW) estimation in living-donor liver transplantation (LDLT), and the use of a deep-learning algorithm (DLA) may further improve its efficiency. However, its accuracy has not been well determined. To evaluate the efficiency and accuracy of DLA-assisted CTV in GW estimation, we performed a retrospective study including 581 consecutive LDLT donors who donated a right-lobe graft. Right-lobe graft volume (GV) was measured on CT using the software implemented with the DLA for automated liver segmentation. In the development group (n = 207), a volume-to-weight conversion formula was constructed by linear regression analysis between the CTV-measured GV and the intraoperative GW. In the validation group (n = 374), the agreement between the estimated and measured GWs was assessed using the Bland–Altman 95% limit-of-agreement (LOA). The mean process time for GV measurement was 1.8 ± 0.6 min (range, 1.3–8.0 min). In the validation group, the GW was estimated using the volume-to-weight conversion formula (estimated GW [g] = 206.3 + 0.653 × CTV-measured GV [mL]), and the Bland–Altman 95% LOA between the estimated and measured GWs was −1.7% ± 17.1%. The DLA-assisted CT volumetry allows for time-efficient and accurate estimation of GW in LDLT. Full article
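
The two statistical steps described here, fitting a linear volume-to-weight conversion in the development group and checking agreement with Bland-Altman 95% limits of agreement in the validation group, can be sketched as follows. The numbers are stand-in data, and the percentage difference is computed against the mean of the estimated and measured weights, one common Bland-Altman formulation.

```python
# Sketch of the two statistical steps in the abstract: a linear volume-to-weight conversion
# and Bland-Altman 95% limits of agreement. All values below are stand-in data.
import numpy as np

# Development group: CT-measured graft volume (mL) vs. intraoperative graft weight (g).
gv_dev = np.array([700.0, 820.0, 910.0, 1010.0, 1150.0])
gw_dev = np.array([660.0, 740.0, 800.0, 870.0, 960.0])
slope, intercept = np.polyfit(gv_dev, gw_dev, deg=1)
print(f"estimated GW [g] = {intercept:.1f} + {slope:.3f} x GV [mL]")

# Validation group: apply the formula and compute Bland-Altman limits of agreement
# on the percentage difference between estimated and measured weights.
gv_val = np.array([760.0, 880.0, 990.0, 1080.0])
gw_val = np.array([700.0, 790.0, 850.0, 930.0])
gw_est = intercept + slope * gv_val
pct_diff = 100.0 * (gw_est - gw_val) / ((gw_est + gw_val) / 2.0)
bias, sd = pct_diff.mean(), pct_diff.std(ddof=1)
print(f"Bland-Altman 95% LOA: {bias:.1f}% +/- {1.96 * sd:.1f}%")
```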

10 pages, 930 KiB  
Article
Machine Learning Model for Outcome Prediction of Patients Suffering from Acute Diverticulitis Arriving at the Emergency Department—A Proof of Concept Study
by Eyal Klang, Robert Freeman, Matthew A. Levin, Shelly Soffer, Yiftach Barash and Adi Lahat
Diagnostics 2021, 11(11), 2102; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11112102 - 13 Nov 2021
Cited by 3 | Viewed by 1968
Abstract
Background & Aims: We aimed at identifying specific emergency department (ED) risk factors for developing complicated acute diverticulitis (AD) and evaluate a machine learning model (ML) for predicting complicated AD. Methods: We analyzed data retrieved from unselected consecutive large bowel AD patients from [...] Read more.
Background & Aims: We aimed at identifying specific emergency department (ED) risk factors for developing complicated acute diverticulitis (AD) and evaluate a machine learning model (ML) for predicting complicated AD. Methods: We analyzed data retrieved from unselected consecutive large bowel AD patients from five hospitals from the Mount Sinai health system, NY. The study time frame was from January 2011 through March 2021. Data were used to train and evaluate a gradient-boosting machine learning model to identify patients with complicated diverticulitis, defined as a need for invasive intervention or in-hospital mortality. The model was trained and evaluated on data from four hospitals and externally validated on held-out data from the fifth hospital. Results: The final cohort included 4997 AD visits. Of them, 129 (2.9%) visits had complicated diverticulitis. Patients with complicated diverticulitis were more likely to be men, black, and arrive by ambulance. Regarding laboratory values, patients with complicated diverticulitis had higher levels of absolute neutrophils (AUC 0.73), higher white blood cells (AUC 0.70), platelet count (AUC 0.68) and lactate (AUC 0.61), and lower levels of albumin (AUC 0.69), chloride (AUC 0.64), and sodium (AUC 0.61). In the external validation cohort, the ML model showed AUC 0.85 (95% CI 0.78–0.91) for predicting complicated diverticulitis. For Youden’s index, the model showed a sensitivity of 88% with a false positive rate of 1:3.6. Conclusions: A ML model trained on clinical measures provides a proof of concept performance in predicting complications in patients presenting to the ED with AD. Clinically, it implies that a ML model may classify low-risk patients to be discharged from the ED for further treatment under an ambulatory setting. Full article

11 pages, 8300 KiB  
Article
Deep Learning Models for Poorly Differentiated Colorectal Adenocarcinoma Classification in Whole Slide Images Using Transfer Learning
by Masayuki Tsuneki and Fahdi Kanavati
Diagnostics 2021, 11(11), 2074; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics11112074 - 09 Nov 2021
Cited by 15 | Viewed by 2615
Abstract
Colorectal poorly differentiated adenocarcinoma (ADC) is known to have a poor prognosis as compared with well to moderately differentiated ADC. The frequency of poorly differentiated ADC is relatively low (usually less than 5% among colorectal carcinomas). Histopathological diagnosis based on endoscopic biopsy specimens is currently the most cost effective method to perform as part of colonoscopic screening in average risk patients, and it is an area that could benefit from AI-based tools to aid pathologists in their clinical workflows. In this study, we trained deep learning models to classify poorly differentiated colorectal ADC from Whole Slide Images (WSIs) using a simple transfer learning method. We evaluated the models on a combination of test sets obtained from five distinct sources, achieving receiver operating characteristic curve (ROC) area under the curves (AUCs) up to 0.95 on 1799 test cases. Full article

Review

Jump to: Research, Other

21 pages, 721 KiB  
Review
Advancing Colorectal Cancer Diagnosis with AI-Powered Breathomics: Navigating Challenges and Future Directions
by Ioannis K. Gallos, Dimitrios Tryfonopoulos, Gidi Shani, Angelos Amditis, Hossam Haick and Dimitra D. Dionysiou
Diagnostics 2023, 13(24), 3673; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics13243673 - 15 Dec 2023
Cited by 1 | Viewed by 1253
Abstract
Early detection of colorectal cancer is crucial for improving outcomes and reducing mortality. While there is strong evidence of effectiveness, currently adopted screening methods present several shortcomings which negatively impact the detection of early stage carcinogenesis, including low uptake due to patient discomfort. As a result, developing novel, non-invasive alternatives is an important research priority. Recent advancements in the field of breathomics, the study of breath composition and analysis, have paved the way for new avenues for non-invasive cancer detection and effective monitoring. Harnessing the utility of Volatile Organic Compounds in exhaled breath, breathomics has the potential to disrupt colorectal cancer screening practices. Our goal is to outline key research efforts in this area focusing on machine learning methods used for the analysis of breathomics data, highlight challenges involved in artificial intelligence application in this context, and suggest possible future directions which are currently considered within the framework of the European project ONCOSCREEN. Full article

11 pages, 841 KiB  
Review
Explainable Artificial Intelligence in the Early Diagnosis of Gastrointestinal Disease
by Kwang-Sig Lee and Eun Sun Kim
Diagnostics 2022, 12(11), 2740; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12112740 - 09 Nov 2022
Cited by 6 | Viewed by 1645
Abstract
This study reviews the recent progress of explainable artificial intelligence for the early diagnosis of gastrointestinal disease (GID). The source of data was eight original studies in PubMed. The search terms were “gastrointestinal” (title) together with “random forest” or “explainable artificial intelligence” (abstract). The eligibility criteria were the dependent variable of GID or a strongly associated disease, the intervention(s) of artificial intelligence, the outcome(s) of accuracy and/or the area under the receiver operating characteristic curve (AUC), the outcome(s) of variable importance and/or the Shapley additive explanations (SHAP), a publication year of 2020 or later, and the publication language of English. The ranges of performance measures were reported to be 0.70–0.98 for accuracy, 0.04–0.25 for sensitivity, and 0.54–0.94 for the AUC. The following factors were discovered to be top-10 predictors of gastrointestinal bleeding in the intensive care unit: mean arterial pressure (max), bicarbonate (min), creatinine (max), PMN, heart rate (mean), Glasgow Coma Scale, age, respiratory rate (mean), prothrombin time (max), and aspartate aminotransferase (max). In a similar vein, the following variables were found to be top-10 predictors for the intake of almond, avocado, broccoli, walnut, whole-grain barley, and/or whole-grain oat: Roseburia undefined, Lachnospira spp., Oscillibacter undefined, Subdoligranulum spp., Streptococcus salivarius subsp. thermophilus, Parabacteroides distasonis, Roseburia spp., Anaerostipes spp., Lachnospiraceae ND3007 group undefined, and Ruminiclostridium spp. Explainable artificial intelligence provides an effective, non-invasive decision support system for the early diagnosis of GID. Full article
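
The explainability workflow this review covers, a tree ensemble paired with variable importance and SHAP values, can be sketched as follows. The features and data are synthetic stand-ins, and the shap package must be installed separately; its return types vary somewhat between versions, which the sketch tries to handle.

```python
# Sketch of an explainable-AI workflow: a random forest plus impurity-based importance
# and SHAP values for per-feature contributions. Data and feature names are stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]   # e.g., MAP (max), bicarbonate (min), ...

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Built-in impurity-based variable importance.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top-5 by impurity importance:", [feature_names[i] for i in top])

# SHAP values give signed, per-sample contributions for each feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
# Older shap versions return a list per class; newer ones return an (n, features, classes) array.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv, X[:100], feature_names=feature_names)
```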

14 pages, 2057 KiB  
Review
Artificial Intelligence and Machine Learning in the Diagnosis and Management of Gastroenteropancreatic Neuroendocrine Neoplasms—A Scoping Review
by Athanasios G. Pantelis, Panagiota A. Panagopoulou and Dimitris P. Lapatsanis
Diagnostics 2022, 12(4), 874; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12040874 - 31 Mar 2022
Cited by 7 | Viewed by 2489
Abstract
Neuroendocrine neoplasms (NENs) and tumors (NETs) are rare neoplasms that may affect any part of the gastrointestinal system. In this scoping review, we attempt to map the existing evidence on the role of artificial intelligence, machine learning, and deep learning in the diagnosis and management of NENs of the gastrointestinal system. After implementation of the inclusion and exclusion criteria, we retrieved 44 studies with 53 outcome analyses. We then classified the papers according to the type of NET studied (26 Pan-NETs, 59.1%; 3 metastatic liver NETs, 6.8%; 2 small intestinal NETs, 4.5%; colorectal, rectal, non-specified gastroenteropancreatic, and non-specified gastrointestinal NETs with 1 study each, 2.3%). The most frequently used AI algorithms were Support Vector Classification/Machine (14 analyses, 29.8%), Convolutional Neural Network and Random Forest (10 analyses each, 21.3%), Random Forest (9 analyses, 19.1%), Logistic Regression (8 analyses, 17.0%), and Decision Tree (6 analyses, 12.8%). There was high heterogeneity in the description of the prediction models, the structure of the datasets, and the performance metrics, and the majority of studies did not report any external validation set. Future studies should aim at incorporating a uniform structure in accordance with existing guidelines for purposes of reproducibility and research quality, which are prerequisites for integration into clinical practice. Full article

Other

Jump to: Research, Review

14 pages, 1788 KiB  
Systematic Review
Artificial Intelligence in Colon Capsule Endoscopy—A Systematic Review
by Sarah Moen, Fanny E. R. Vuik, Ernst J. Kuipers and Manon C. W. Spaander
Diagnostics 2022, 12(8), 1994; https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12081994 - 17 Aug 2022
Cited by 12 | Viewed by 2117
Abstract
Background and aims: The applicability of colon capsule endoscopy in daily practice is limited by the labor-intensive reviewing time it entails and the risk of inter-observer variability. Automated reviewing of colon capsule endoscopy images using artificial intelligence could save time while providing an objective and reproducible outcome. This systematic review aims to provide an overview of the available literature on artificial intelligence for reviewing the colonic mucosa by colon capsule endoscopy and to assess the necessary action points for its use in clinical practice. Methods: A systematic search of literature published up to January 2022 was conducted using Embase, Web of Science, OVID MEDLINE, and Cochrane CENTRAL. Studies reporting on the use of artificial intelligence to review second-generation colon capsule endoscopy colonic images were included. Results: 1017 studies were evaluated for eligibility, of which nine were included. Two studies reported on computed bowel cleansing assessment, five studies reported on computed polyp or colorectal neoplasia detection, and two studies reported on other implications. Overall, the sensitivity of the proposed artificial intelligence models was 86.5–95.5% for bowel cleansing and 47.4–98.1% for the detection of polyps and colorectal neoplasia. Two studies performed per-lesion analysis, in addition to per-frame analysis, which improved the sensitivity of polyp or colorectal neoplasia detection to 81.3–98.1%. By applying a convolutional neural network, the highest sensitivity of 98.1% for polyp detection was found. Conclusion: The use of artificial intelligence for reviewing second-generation colon capsule endoscopy images is promising. The highest sensitivity of 98.1% for polyp detection was achieved by deep learning with a convolutional neural network. Convolutional neural network algorithms should be optimized and tested with more data, possibly requiring the set-up of a large international colon capsule endoscopy database. Finally, the accuracy of the optimized convolutional neural network models needs to be confirmed in a prospective setting. Full article
