Review

A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology

Department of Internal Medicine II, Faculty of Medicine, Shimane University, Izumo 693-8501, Shimane, Japan
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(9), 1719; https://doi.org/10.3390/diagnostics11091719
Submission received: 31 August 2021 / Revised: 17 September 2021 / Accepted: 17 September 2021 / Published: 19 September 2021
(This article belongs to the Special Issue The Next Generation of Upper Gastrointestinal Endoscopy)

Abstract

Artificial intelligence (AI) is rapidly becoming an essential tool in the medical field as well as in daily life. Recent developments in deep learning, a subfield of AI, have brought remarkable advances in image recognition, which facilitates improvement in the early detection of cancer by endoscopy, ultrasonography, and computed tomography. In addition, AI-assisted big data analysis represents a great step forward for precision medicine. This review provides an overview of AI technology, particularly for gastroenterology, hepatology, and pancreatology, to help clinicians utilize AI in the near future.

1. Introduction

Rapid developments in artificial intelligence (AI) technologies bring huge benefits to daily life through smartphones (iPhone’s Siri, etc.), wearables (smart watches, etc.), and robotic assistants (smart speakers, self-driving cars, etc.) [1,2]. In the medical field, AI also holds great promise. Major advances in medical AI have had a tremendous impact at two main levels: (1) image recognition and (2) big data analysis. AI can detect very small changes that are difficult for humans to perceive. For example, AI can detect lung cancer up to a year earlier than physicians [3] and can diagnose skin cancer with diagnostic performance superior to that of physicians [4]. In addition, AI can reach the desired output within seconds and with more “consistent” performance, whereas doctors may perform “inconsistently” owing to insufficient training or exhaustion from busy clinical demands. Visual assessment by imaging physicians is qualitative, subjective, prone to error, and subject to intra-observer and inter-observer variability. AI may outperform physicians in some cases [5], and it holds great promise for reducing clinician workload and the cost of medical care. However, clinicians must still verify AI output when caring for patients.
In addition to image analysis, AI is well suited to big data analysis because it can generalize across a variety of data types and interpret complex combinations of variables [6]. AI techniques have therefore been widely applied to big data analyses, such as genomics, novel medicine discovery, and the prediction of disease outcomes [7,8,9]. For example, IBM Watson supports oncologists by providing possible therapeutic options based on information from over 300 medical journals, over 200 academic books, and over 15,000,000 pages of literature related to 11 types of neoplasia [10,11]. In the field of gastroenterology, AI has also made remarkable progress, and many international meetings highlight AI-related sessions. In addition, several new conferences have been established in recent years, such as the Global GI-AI Summit [12]. Owing to this potential for image recognition and big data analysis, not only clinicians but also researchers can benefit from the application of AI methodologies. This review focuses on recent AI research in the fields of gastroenterology, hepatology, and pancreatology (summarized in Figure 1) and provides an overview of AI technology to help clinicians utilize AI in the near future.

2. Artificial Intelligence

AI is “a broad discipline with the goal of creating intelligent machines, as opposed to the natural intelligence that is demonstrated by humans and animals” (from the State of AI Report 2020) [1]. In 1950, Alan Turing published a landmark paper describing the creation of machines that “think” [13]. In 1955, John McCarthy et al. used the term “artificial intelligence” for the first time in a proposal for the Dartmouth Conference held in 1956 [14], which is considered the dawn of AI technology. In 1959, Arthur Samuel developed an algorithm for machine learning, a subfield of AI that refers to a computer’s ability to learn from data, detect patterns, and make decisions without being explicitly programmed for the output [15,16]. Before learning algorithms were developed, humans had to analyze data themselves and program machines with hand-designed rules. In contrast, machine learning systems can automatically detect patterns and attributes from data and make decisions without human input.
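As an illustration of this learning-from-data idea, the following minimal Python sketch trains a classifier with scikit-learn on a bundled toy dataset; it is purely didactic and unrelated to any of the clinical studies reviewed here. No diagnostic rules are hand-written: the model infers its decision boundary from labeled examples alone.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy tabular dataset; the point is that no rules are hand-coded.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns patterns from the training examples alone.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```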
A pivotal breakthrough in AI technology came in 2012 from deep learning, a new type of machine learning developed by Geoffrey Hinton et al. [17]. Hinton’s team at the University of Toronto presented a dramatically improved error rate for visual recognition at a competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), jointly held by multiple universities in the United States [18]. Using deep learning for the first time in the competition, they improved the error rate by about 10%. The network they used was a convolutional neural network (CNN) called AlexNet, which has since been widely applied to image recognition tasks [19]. Deep learning uses a system called a neural network, which imitates the neuronal network of the human brain and combines different mathematical models. An input layer and an output layer alone are not sufficient to process complex information (Figure 2A); more sophisticated analyses become possible by creating intermediate layers between them. This increase in the number of intermediate layers is what the word “deep” refers to, and deep learning is a computer processing system with many such intermediate layers (Figure 2B). Each layer is composed of filters that extract features from the input images, with higher-level features built from lower-level ones: for example, the first layer extracts patterns at the texture level, the second layer at the frame level, the third layer at the shape level, and the last layer indicates a list of parts in the original input image. Notably, the filters are created automatically as the network learns features from the input data (see details and examples in [18,20,21,22,23]). This breakthrough in deep learning was facilitated by advances in graphics processing units (GPUs), which are faster than central processing units (CPUs) for real-time graphics and parallel processing [18]. In 2015, AI outperformed humans in the ILSVRC. Another illustration of the outstanding performance of deep learning is AlphaGo, a deep learning-based algorithm that defeated top human players at the game of Go [24]. These developments in deep learning have greatly contributed to the proliferation of studies attempting to automate the interpretation and evaluation of medical images and clinical data, and have expanded the application of AI to many fields. Indeed, over 10,000 papers on medical AI were published last year (Figure 2C). Reflecting these developments, a U.S. law enabling the Food and Drug Administration (FDA) to approve medical AI devices was enacted in December 2016. In April 2018, the first AI device was approved to provide screening decisions for diabetic retinopathy in adults with diabetes without requiring a clinician’s interpretation [25]. To date, several AI-aided devices have been approved by the FDA and in the European Union (EU) in the fields of gastroenterology, hepatology, and pancreatology (Table 1).
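To make the stacked-layer structure concrete, below is a minimal, hypothetical CNN in PyTorch. It is an illustrative sketch of the “intermediate layers” described above, not AlexNet or any model from the cited studies; the layer sizes and comments about what each level extracts are simplified assumptions.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A toy convolutional network with a few stacked intermediate layers."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: texture
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: frames
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # high level: shapes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)  # the filters are learned, not hand-designed
        return self.classifier(h.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 224, 224))  # e.g., one RGB endoscopic frame
print(logits.shape)  # torch.Size([1, 2]): cancer vs. non-cancer scores
```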

3. Pharyngeal Cancer

Pharyngeal cancer is generally detected by otolaryngologists, and the majority of patients are diagnosed at an advanced stage, resulting in a poor prognosis [26]. Early detection is therefore critical to improve the survival rate of patients with pharyngeal cancer. With recent advances in endoscopic technology, such as narrow-band imaging (NBI) and magnifying endoscopy, not only otolaryngologists but also gastrointestinal endoscopists can detect laryngopharyngeal cancers at an early stage [27,28,29]. A few reports on the AI-aided detection of pharyngeal cancers have been published by otolaryngologists and gastroenterologists [30,31,32,33]. Tamashiro et al. trained an AI-aided endoscopy system with 5403 images of superficial and advanced pharyngeal cancers and validated the system with 1912 images of cancers and non-cancers [32] (Figure 3A). The AI system correctly detected all cancers (even those smaller than 10 mm), and NBI images yielded a much higher sensitivity (85.6%) than white-light endoscopy images (70.1%). In actual clinical care, “real-time” detection is more practical and effective. Kono et al. developed a real-time detection system [33,34] that, in a validation study using video images processed at a high speed of 0.03 s per image, diagnosed 23/25 pharyngeal cancers as cancers (sensitivity: 92%) and 17/36 non-cancers as non-cancers (specificity: 47%). They attributed the false-positive and false-negative cases to the complex environment of the laryngopharyngeal area, including saliva, bubbles, blurring, and inadequate filming conditions. Further improvements to the AI system, using a wider variety of training images from normal subjects and pharyngeal cancer patients, are needed.
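Sensitivity and specificity, quoted throughout this review, reduce to simple ratios over the validation counts. The short check below reproduces the Kono et al. figures from the counts given above (23/25 cancers correctly called, 17/36 non-cancers correctly called); the helper functions are our own illustration, not part of any cited system.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate among actual cancers."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate among actual non-cancers."""
    return tn / (tn + fp)

print(f"sensitivity: {sensitivity(23, 2):.0%}")   # 23/25 -> 92%
print(f"specificity: {specificity(17, 19):.0%}")  # 17/36 -> 47%
```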

4. Upper Gastrointestinal Diseases

The overall survival rates for upper gastrointestinal cancers are poor, since many are diagnosed at advanced stages [38]. However, if these cancers are detected early, the five-year survival rates exceed 90% [39,40,41]. For the early detection of neoplasms, endoscopists must pay attention to very small changes in the mucosa. Unfortunately, the detection rates and accuracy of endoscopic diagnosis depend largely on the endoscopist’s experience [42]. AI-aided detection systems are therefore a promising tool in this field.

4.1. Esophageal Cancer

Esophageal cancer is the seventh most common neoplasm and the sixth most deadly cancer worldwide [43]. Squamous cell carcinoma (SCC) is the most common type of esophageal cancer [43], and several AI systems to detect SCCs have been reported [44,45,46,47,48]. Recently, Tokai et al. demonstrated that an AI using a CNN detected 95.5% (279/291) of SCCs in 10 s [44]. They also showed that NBI was more sensitive than white-light imaging, consistent with previous reports [45]. In addition, they demonstrated that the AI estimated the invasion depth with a sensitivity of 84.1% and an accuracy of 80.9%, both higher than those of endoscopists. Several reports have further shown that magnifying endoscopy enhances the accuracy of depth diagnosis [46,47]; interestingly, the AI performance was the same as that of the experts. In clinical endoscopy practice, “real-time” diagnosis is required for AI-aided endoscopy. In a multicenter case-control study, Luo et al. validated an established AI system, known as GRAIDS, which was trained with 1,036,496 endoscopic images and demonstrated high sensitivity, specificity, and accuracy [48]. This is one of the largest studies of AI for medical applications.
While the majority of esophageal cancers are SCCs, the incidence of esophageal adenocarcinoma is increasing rapidly in Europe and North America [49]. Several reports have addressed the detection of adenocarcinoma using AI methodologies [50,51,52,53,54,55,56,57,58]. An AI system with white-light endoscopy developed by de Groof et al. detected Barrett’s neoplasia with high performance (a sensitivity of 95%, a specificity of 85%, and an accuracy of 92%) [50]. Subsequently, they developed an AI algorithm with multi-step training and improved the accuracy of AI detection of Barrett’s neoplasia beyond the performance of endoscopists [51]. Recently, Hashimoto et al. used a high-speed, real-time AI detection algorithm and demonstrated high sensitivity (96.4%), specificity (94.2%), and accuracy (95.4%) for the detection of early neoplasia in Barrett’s esophagus [52]. A meta-analysis by Arribas et al. showed that AI-aided endoscopy can detect both types of esophageal neoplasia, SCC and adenocarcinoma, with high sensitivity (approximately 90%) and accuracy (area under the curve (AUC) of approximately 0.95) [53], indicating that AI is a promising tool for avoiding missed neoplasia during endoscopy.

4.2. Gastric Cancer

Gastric cancer is the fourth most lethal cancer worldwide [43]. As with the other gastrointestinal cancers described above, early detection is critical to improve survival rates [59]. In 2018, Hirasawa et al. first reported a novel AI-aided (computer-aided) diagnostic system for the detection of gastric cancer using a deep learning CNN [35] (Figure 3B). In total, 13,584 endoscopic images of gastric cancer and non-cancer were collected to train the AI system. To verify the diagnostic accuracy, 2296 endoscopic images from 69 consecutive cases of gastric cancer (77 lesions) were used, and the trained AI detected 92.2% of the gastric cancer lesions. Using another CNN algorithm, Wu et al. demonstrated higher performance for the AI than for expert endoscopists (accuracy 92.5% vs. 89.7%, sensitivity 94% vs. 93.9%, specificity 91% vs. 87.3%) [60]. In addition to these “still”-image detection methodologies, Horiuchi et al. developed an AI for “real-time” diagnosis using magnifying endoscopy with NBI [61]. The AI system achieved an accuracy of 85.1%, a sensitivity of 87.4%, and a specificity of 82.8%, significantly more accurate than two experts. More recently, they recruited a larger number of experts (67 endoscopists) to determine whether the AI detection system performed better than endoscopists [62]. The AI system detected more early gastric cancers in a shorter time than the endoscopists, with a significantly higher sensitivity (58.4% versus 31.9%). Although the accuracy of the system was slightly lower than that of the experts and further training and adjustment are required, it is a promising tool for detecting early cancer lesions.
Since AI is highly sensitive in image recognition, misdiagnoses can occur, and several reports have addressed improving the accuracy of the distinction between cancer and non-cancer. In the study by Hirasawa et al., most of the misdiagnoses by AI involved gastritis diagnosed as gastric cancer, mainly due to the high sensitivity of the AI [35]. Horiuchi et al. established AI-aided magnifying endoscopy with NBI and demonstrated that gastritis could be distinguished from gastric cancer with a correct diagnostic rate of 85.3% [63,64]. Another color-enhanced imaging modality, flexible spectral imaging color enhancement (FICE), can also be used for the AI-aided detection of gastric cancer. Miyaki et al. used a support vector machine, a machine learning approach trained and validated on images, and found that the system yielded a detection accuracy of 85.9%, a sensitivity of 84.8%, and a specificity of 87.0% [65]. Furthermore, in the setting of gastritis, delineating cancerous regions can be challenging; Kanesaka et al. first reported the use of AI for this purpose in gastric cancer [66]. Their AI diagnosis showed relatively good results, with a sensitivity of 65.5%, a specificity of 80.8%, and a correct diagnostic rate of 73.8%. Kubota et al. developed an AI for diagnosing invasion depth using a neural network and demonstrated accuracies of 77%, 49%, 51%, and 55% for T1, T2, T3, and T4 stages, respectively [67]. Zhu et al. also developed an AI for diagnosing invasion depth in gastric cancer. For distinguishing a depth of M, SM1, SM2, or deeper across all gastric cancers, including advanced stages, the sensitivity, specificity, and accuracy were 76.5%, 95.6%, and 89.1% for the AI, respectively, versus 87.8%, 63.3%, and 71.5% for endoscopists [68]. Yoon et al. reported an AI that classified early gastric cancer as intramucosal or submucosal with an AUC of 0.851 [69]. Furthermore, they found that the factor contributing most to the AI’s prediction of tumor depth was histologic differentiation; undifferentiated-type histology corresponded to lower AI accuracy.

4.3. Helicobacter pylori Infection and Gastric Atrophy

Helicobacter pylori infection followed by gastric atrophy is an important cause of gastric cancer [70]. Early diagnosis and management of H. pylori infection and gastric atrophy is a key strategy to reduce gastric cancer-related death. However, the diagnosis of H. pylori infection based on endoscopic findings remains a subjective process that depends greatly on the competence of the treating physician, and diagnostic accuracy varies widely [71]. Shichijo et al. first developed an AI system for the diagnosis of H. pylori-induced gastritis, trained with 32,208 white-light endoscopic images from 1768 H. pylori-positive and -negative patients [72]. Interestingly, the AI exceeded the performance of the endoscopists in diagnosing H. pylori infection. In addition, given that H. pylori status includes both current infection and successful eradication therapy (post-eradication), the same authors [73] and another group [74] trained AI systems with cases of current infection, no infection, and post-eradication status. These studies demonstrated diagnostic performance similar to that of endoscopists, with correct diagnostic rates of 84.2% for no infection, 82.5% for current infection, and 79.2% for post-eradication status [74]. More recently, Nakahira et al. developed a unique AI system to evaluate the risk of gastric cancer [75]. The AI was trained on images from high-risk (patients with gastric cancer), moderate-risk (patients with current or past H. pylori infection or gastric atrophy), and low-risk (patients with no history of H. pylori infection or gastric atrophy) patients. The trained system successfully stratified cancer risk: the rates of gastric cancer among patients classified by the AI as low, moderate, and high risk were 2.2%, 8.8%, and 16.4%, respectively.

4.4. Upper Gastrointestinal Bleeding

In addition to the image analyses above, AI can be applied to big data analysis to predict disease outcomes. For acute upper gastrointestinal bleeding, a systematic review by Shung et al., which included 14 studies with 30 assessments of machine learning models, revealed that AI performed better than validated clinical risk scores in predicting mortality from upper gastrointestinal bleeding [76]. The authors subsequently published an excellent machine learning-based risk scoring system with a greater AUC, higher specificity, and 100% sensitivity compared to clinical risk scores [77].
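As a rough sketch of what such machine learning risk models look like in code, the example below fits a gradient-boosted classifier to synthetic tabular data and scores held-out patients by predicted risk. The features are invented stand-ins for admission variables; this does not reproduce the published model by Shung et al.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for clinical variables (e.g., hemoglobin, systolic BP).
X = rng.normal(size=(n, 5))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]  # per-patient risk score in [0, 1]
print(f"AUC on synthetic data: {roc_auc_score(y_te, risk):.2f}")
```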

4.5. Quality Control

Blind spots can persist even when endoscopists intend to observe the entire stomach, and they are a cause of missed gastric cancer [78]. Wu et al. established a real-time quality improvement system, named WISENSE (wise + sense), and conducted a randomized controlled trial of 324 patients to confirm the comprehensiveness of real-time imaging of the entire stomach; the AI reduced imaging omissions by 15% [60,79]. Using a similar AI, Chen et al. conducted a randomized controlled trial comparing six groups defined by the presence or absence of sedation, normal- or small-diameter endoscopes, and the use or non-use of AI. They reported that normal-diameter endoscopy with AI under sedation resulted in significantly fewer omissions [80]. An AI system should be able to detect cancer even under less-than-ideal conditions, because suboptimal conditions are quite common in daily practice, particularly in the pharyngeal area. Normal images obtained under such “real-life” conditions are needed for AI training.

5. Gastrointestinal Stromal Tumor (GIST)

Large GISTs often show various findings on endoscopy and endoscopic ultrasonography (EUS), which makes it challenging for clinicians to distinguish GISTs from other submucosal tumors (SMTs). Minoda et al. reported the first study to evaluate the ability of AI to diagnose SMTs from EUS images. The AI-aided EUS showed good diagnostic capability for large SMTs (≥20 mm), with a sensitivity of 91.7%, a specificity of 83.3%, and an accuracy of 90.0%, which were better than those of EUS experts (50.0%, 83.3%, and 53.3%, respectively) [81]. The AUC of the AI-aided EUS for large SMTs was 0.965, significantly higher than that of the EUS expert readers (0.684). In the future, with the help of AI-aided EUS, non-experts may be able to make a differential diagnosis of GIST with accuracy equal to or higher than that of EUS experts and without an invasive sampling process.

6. Duodenal and Small Intestinal Lesions

Duodenal neoplasia is relatively rare and is sometimes missed during upper gastrointestinal endoscopy. Inoue et al. pretrained an AI system (a deep learning CNN) with cases of duodenal neoplasia (65 adenomas, 31 high-grade dysplasias) and showed that the system could detect duodenal neoplasia with a sensitivity of 94.7%, although there were some false positives (12.6%), probably due to peristalsis-related raised folds [82]. The diagnostic yield of video capsule endoscopy (VCE) for small intestinal lesions is as high as 63%, which is superior to push endoscopy (single- or double-balloon endoscopy) [83]. VCE produces large amounts of data (over 50,000 images), which require considerable time for manual review by clinicians (30–120 min) [84,85], so time-saving approaches are needed [86]. AI is a promising tool for this purpose, and several studies have been performed and summarized previously [87]. Small intestinal bleeding is the most frequent indication for VCE. Although commercially available reading systems include blood content enhancement algorithms, referred to as “suspected blood indicators” (SBIs), the false positive rate is still high at over 70% [88]. Xiao et al. and Hassan et al. developed AI algorithms for the detection of bleeding with high sensitivity and specificity (99%) [23,89]. Aoki et al. also developed a novel AI-based blood detection algorithm with high sensitivity, specificity, and accuracy (96.6%, 99.9%, and 99.8%, respectively), all significantly higher than those of the SBI (76.9%, 99.8%, and 99.3%, respectively) [90]. They also showed the utility of an AI-based system for various small intestinal lesions (erosions, ulcers, angioectasias, and protruding lesions) in multiple clinical studies [36,91,92] (Figure 3C). Hopefully, these novel AI algorithms will reduce reading times for clinicians in the near future [93]. However, there are limitations to developing AI-aided VCE: small intestinal diseases are rather rare, making it difficult to obtain sufficiently large data sets for training, and VCE images may contain many artifacts (dark and red frames) and other objects (bile, food, air bubbles, etc.). Large collaborative databases are needed to develop more precise systems.
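The time-saving idea is simple in code form: score every frame with a classifier and surface only the frames above a threshold for human review. The sketch below is a generic, hypothetical illustration (the per-frame scoring model is assumed), not the algorithm of any cited VCE system.

```python
from typing import Callable, List

def triage_frames(frames: List[int],
                  score_fn: Callable[[int], float],
                  threshold: float = 0.5) -> List[int]:
    """Return indices of frames whose abnormality score crosses the threshold."""
    return [i for i, f in enumerate(frames) if score_fn(f) >= threshold]

# Demo with synthetic per-frame scores standing in for a trained CNN's output.
scores = [0.02, 0.91, 0.10, 0.77, 0.05]
flagged = triage_frames(list(range(len(scores))), lambda i: scores[i])
print(f"review {len(flagged)} of {len(scores)} frames: {flagged}")  # -> [1, 3]
```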

7. Colon Cancer and Polyps

Colorectal cancer is the second most lethal cancer worldwide [43]. The total removal of colorectal adenomas by colonoscopy (a “clean colon”) can reduce colorectal cancer deaths by 53% [94]. It is well known that approximately 20–50% of colorectal polyps are overlooked [95,96], an incidence that may be affected by the skill and fatigue of the endoscopist. Recent developments in deep learning algorithms have improved the detection sensitivity and specificity of AI-aided colonoscopy (in other words, computer-aided detection (CADe)). Using a deep learning algorithm, Misawa et al. first reported real-time detection of colon polyps, with a sensitivity of 90% and a specificity of 63.3% [97]. Urban et al. improved the specificity to 93%, with a sensitivity of 93%, using a wider variety of images (4088 unique polyps) for training [98]. In a more recent study, they demonstrated that AI-aided colonoscopy trained with more images (56,668 images) detected polyps with a higher sensitivity (98%) and an improved specificity (93%), using a novel publicly accessible video database they established (the SUN database: http://amed8k.sundatabase.org/ (accessed on 19 September 2021)) [99]. The first randomized controlled trial was conducted by Wang et al. and included a total of 1058 patients (536 standard colonoscopies and 522 computer-aided colonoscopies) [100]. The AI-aided colonoscopy significantly increased the adenoma detection rate (53% in the AI group versus 31% in the control group). Recently, the same group conducted further high-quality studies, including a double-blind randomized trial comparing an AI–colonoscopy system to a sham system, and demonstrated that the adenoma detection rate was significantly higher in the AI–colonoscopy group (34%, 165/484) than in the sham group (28%, 132/478) [101]. The adenoma miss rate was also significantly lower with AI–colonoscopy than with routine colonoscopy (13.8% vs. 40.0%) [102]. They noted that the polyps initially missed by endoscopists but identified by the AI system tended to be small, isochromatic, flat, located behind colon folds, or at the edge of the visual field.
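The adenoma detection rate (ADR) compared in these trials is the fraction of patients with at least one detected adenoma. The small check below reproduces the percentages from the counts reported for the AI and sham arms above [101]; the helper function is our own illustration.

```python
def adr(patients_with_adenoma: int, total_patients: int) -> float:
    """Adenoma detection rate: patients with >=1 adenoma / patients examined."""
    return patients_with_adenoma / total_patients

print(f"AI arm:   {adr(165, 484):.0%}")  # -> 34%
print(f"sham arm: {adr(132, 478):.0%}")  # -> 28%
```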
If optical colonoscopy is not possible, colon capsule endoscopy or CT colonography may be performed. In a clinical trial, Deding et al. found that the sensitivity of colon capsule endoscopy (with lesion location estimated by AI) following an incomplete optical colonoscopy was superior to that of CT colonography, with a relative sensitivity of 2.67 for polyps >5 mm and 1.91 for polyps >9 mm [103].
To reduce unnecessary endoscopic resections and decrease complications and medical costs, it is important to distinguish neoplasms from non-neoplasms. In the first prospective clinical study in this field, Kominami et al. achieved high performance for real-time diagnosis with AI-aided colonoscopy (in other words, computer-aided diagnosis (CADx)), with a sensitivity of 93.3% and a specificity of 93.3% [104]. Tamami et al. demonstrated that a computer-aided NBI colonoscopy correctly diagnosed T1b-stage cancer with a sensitivity of 83.9% and a specificity of 82.6%, which was better than conventional endoscopy [105]. Mori et al. proved the utility of AI-aided endocytoscopy, an ultra-high-magnification endoscopy that permits in vivo assessment of cellular structure, in prospective clinical trials. In their studies, the AI-aided endocytoscopy had a sensitivity of 92% and an accuracy of 89.2%, quite similar to expert pathologists [106,107]. In a recent multicenter study, Kudo et al. showed even better performance (96.9% sensitivity, 100% specificity, and 98% accuracy) for an AI endocytoscopy trained on 69,142 endocytoscopic images, taken at 520× magnification, from patients with colorectal polyps who underwent endoscopy at five academic centers [108]. These tremendous efforts by endoscopists and engineers have created a powerful basis for the development of AI-assisted devices, and several AI-aided endoscopic systems have been approved by the FDA and in the EU (Table 1). With these devices, endoscopists can begin an endoscopic exam immediately by connecting the endoscope to a terminal and monitor equipped with the software. Moreover, a prototype of a novel AI colonoscope with two lenses, a 160° to 240° lateral-backward-view lens and a standard 160° forward-view lens, has been published with accompanying videos [109].
Depth prediction for colon cancer is another issue in colonoscopic diagnosis. Takeda et al. demonstrated that AI endocytoscopy correctly diagnosed invasive colorectal cancer with a sensitivity of 98.1% and a specificity of 100% [110]. Chen et al. used EUS with AI to predict tumor deposits, achieving a higher AUC than magnetic resonance imaging (MRI) [111]. Recently, Kudo et al. established an AI prediction system using patient data (age, sex, tumor size, morphology, lymphatic and vascular invasion, and histology) and demonstrated that the AI identified patients with lymph node metastases of T1 colon cancer better than the United States guidelines (AUC 0.83 vs. 0.73) [112]. They suggested that such prediction models might be used to determine which patients require additional surgery after endoscopic resection of T1 colon cancer.

8. Inflammatory Bowel Disease

The incidence of inflammatory bowel disease (IBD), represented by Crohn’s disease (CD) and ulcerative colitis (UC), is increasing throughout the world, but its pathogenesis remains unclear [113,114,115,116]. Recent studies indicate that IBD is a multifactorial immune-mediated disease resulting from a complex interplay among host genetic, environmental, and resident microbial factors [115,117,118,119]. To explore this pathogenesis, AI-based big data analyses, such as pathological elucidation and biomarker identification, are ongoing and have been summarized in another review [120]. Using AI data analysis, Waljee et al. predicted remission in patients with moderate-to-severe CD, with AUCs of 0.78 at week 8 and 0.76 at week 6 [121]. Wang et al. applied AI to predict medication non-adherence in CD patients [122].
Endoscopic assessment of inflammation in IBD may vary among endoscopists depending on their level of experience. Several AI-aided UC scoring algorithms trained on unbiased UC imaging data linked to histological data have demonstrated excellent performance in distinguishing endoscopic remission (Mayo 0–1) from moderate-to-severe disease (Mayo 2–3) [123,124,125]. Because even Mayo 1 mucosa harbors very mild inflammation, Ozawa et al. focused on distinguishing Mayo 0 and showed high performance of AI-aided diagnosis, with AUCs of 0.86 and 0.98 for Mayo 0 and Mayo 0–1, respectively [126]. In a more recent prospective study, Takenaka et al. trained an AI algorithm with 40,758 colonoscopy images and 6885 biopsy results from 2012 UC patients and showed that the system identified endoscopic remission with 90.1% accuracy and histologic remission with 92.9% accuracy [127]. Another approach using endocytoscopy with AI was reported by Maeda et al. [128]. Regarding capsule endoscopy, as indicated above, Kumar et al. reported the first AI-aided diagnostic system for CD lesions of various severities, with a sensitivity and specificity both over 90% [86]. Charisis et al. reported an improved capsule endoscopy algorithm for detecting CD lesions, with a sensitivity of 95.2%, a specificity of 92.4%, and an accuracy of 93.8% [129]. In a more recent study, Klang et al. employed a deep learning algorithm with more training images for detecting CD lesions by AI-aided capsule endoscopy and demonstrated excellent performance, with an AUC of 0.99 and an accuracy of 95.4–96.7% [130]. CT and MRI images are necessary to determine disease activity in IBD. Although it is challenging for AI to recognize the intestinal wall structure on CT and MRI, semi-automated AI-aided systems have been reported and summarized previously [131,132].
UC-associated dysplasia and cancer are often difficult to detect. A recent case report suggested the usefulness of AI-based colonoscopy for the detection of dysplasia in patients with longstanding UC [133].

9. Irritable Bowel Syndrome (IBS)

The prevalence of IBS is estimated at 10–20% worldwide [134], and a few AI-related studies of IBS have been published. Most patients with IBS identify certain foods as triggers for their symptom flare-ups, and two unique smartphone applications aim to identify these potential trigger foods. Using photos of food from the mobile application, Chung et al. developed a personal informatics system that allows patient–provider collaboration and supports precise individual management [135]. Zia et al. designed an application using an AI algorithm based on regression analyses to identify possible relationships between foods and IBS symptoms; their two-week study featured symptom assessments four times a day and at every meal using a 100-point graded sliding scale [136]. These AI-aided mobile applications tether patients directly to clinicians by capturing frequent, continuous data from patients and providing individualized feedback from clinicians. This direct interaction is an advantage of AI and will change health care strategies.
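The underlying statistical idea, regressing symptom scores on meal-level food exposures, can be sketched in a few lines. The foods and data below are entirely synthetic, and this is not the algorithm of either cited application.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
foods = ["dairy", "wheat", "caffeine"]           # hypothetical candidate triggers
X = rng.integers(0, 2, size=(56, 3))             # 4 meals/day x 14 days, eaten or not
symptom = 30 + 20 * X[:, 0] + rng.normal(scale=5, size=56)  # dairy-driven by design

# Each coefficient estimates the symptom change associated with one food.
coefs = LinearRegression().fit(X, symptom).coef_
for food, c in zip(foods, coefs):
    print(f"{food}: {c:+.1f} points on a 100-point symptom scale")
```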
In IBS, the gut microbiota is likely linked to symptoms and pathogenesis [137]. Fukui et al. established a unique AI prediction model for identifying IBS patients based on the gut microbiota (sensitivity >80% and specificity >90%) [138].

10. Liver Diseases

This section reviews AI-aided image analyses for diagnosing liver masses. In addition, many data analysis studies using AI algorithms have been conducted to predict patient outcomes and discover biomarkers.

10.1. Liver Masses

Metabolic risk factors for hepatocellular carcinoma (HCC), such as obesity, type 2 diabetes, and nonalcoholic fatty liver disease, are replacing viral and alcohol-related liver disease [139]. With this increase in metabolic disorders, the incidence of liver cancer is steadily growing, and it is now the third leading cause of cancer-related death [43,140,141].
The detection and diagnosis of liver masses is performed by ultrasonography, CT, and MRI, and AI has been developed for hepatic mass identification. Yasaka et al. employed AI-aided enhanced CT, which achieved high performance (AUC = 0.92) in differentiating malignant liver masses (HCCs and other malignant masses) from benign tumors (hemangiomas) or cysts [142]. An AI-aided multi-phasic MRI developed by Hamm et al. demonstrated higher performance than two radiologists for the detection of six common liver masses (HCC, cyst, hemangioma, focal nodular hyperplasia (FNH), intrahepatic cholangiocarcinoma, and metastatic tumor), with a sensitivity of 90% vs. 80%/85% and a specificity of 98% vs. 96%/96% [143]. For HCC in particular, the AI had a sensitivity of 90%, compared to 60%/70% for the radiologists. Furthermore, the AI processing speed was extremely fast, at 5.6 ms. These results are promising, and the FDA recently approved an AI device for liver lesion detection on MRI and CT (Table 1). Developing AI-aided ultrasonography is difficult because of several technical issues, including variability in data formats and investigator skill level; the quality of an ultrasonographic image is highly operator dependent. Although the conditions of examination directly affect image quality, several positive results have been reported and summarized [144]. Schmauch et al. showed that AI-aided ultrasonography detected and diagnosed liver masses (HCC, hemangioma, metastasis, cysts, and FNH) with high performance (AUCs of 0.935 and 0.916, respectively) [37] (Figure 3D). AI has also been applied to enhanced ultrasonography [145], and an EUS-CNN model demonstrated the capability to autonomously identify liver masses and accurately classify them as malignant or benign [146]. AI development in ultrasonography faces further challenges, including a high dependence on operator experience for acquiring quality images, numerous equipment vendors and models, multiple image quality parameters, high image diversity, and hurdles in database construction; in addition, ultrasound data require high-speed processing. For histopathology, Sun et al. reported the first method to classify liver cancer histopathological images using AI [147].
To identify patients with cirrhosis at high risk of developing HCC, Singal et al. used an AI algorithm and reported good performance [148]. Another important clinical issue in HCC management is identifying patients at high risk of post-treatment recurrence. To predict post-operative recurrence, Feng et al. used AI-aided contrast-enhanced MRI and reported an AUC of 0.83, a sensitivity of 90%, a specificity of 75%, and an accuracy of 84%, compared to radiologists with an AUC of 0.47–0.57, a sensitivity of 19.3–45.2%, a specificity of 67.3–83.7%, and an accuracy of 58.8% [149]. Abajian et al. also showed the utility of AI combining MRI and patient data [150]. For a similar purpose, Saillard et al. used histopathology images and highlighted the importance of pathologist–AI interactions in constructing deep learning algorithms, which benefit from expert knowledge [151]; their model was superior to existing prognostic factors, with features reflecting a poor prognosis including the presence of vascular spaces in the tumor and a cord-like shape. AI ultrasonography can also be used to predict the response to transcatheter arterial chemoembolization (TACE) and survival after radiofrequency ablation (RFA) or surgery [152,153].

10.2. Nonalcoholic Fatty Liver Disease (NAFLD)

With the increase in systemic metabolic diseases (obesity, diabetes, hyperlipidemia, etc.), the incidence of NAFLD is increasing worldwide [154]. Since NAFLD-derived HCC is also increasing, the early detection of NAFLD is critical to prevent future carcinogenesis. Recently, deep learning algorithms such as CNNs have improved the detection of fatty liver disease by ultrasonography [155,156]. Fibrosis is an advanced stage of fatty liver disease and the most important risk factor for carcinogenesis. The gold standard for the diagnosis of fibrosis is liver biopsy, which is invasive and costly [157,158]. A systematic review by Decharatanachart et al. suggested that AI-aided systems (ultrasonography, elastography, CT, and MRI) have promising potential for diagnosing liver steatosis and fibrosis, with an overall sensitivity of 97% and a specificity of 91% [159]. Elastography is currently the most commonly used modality for staging liver fibrosis [160], and two papers have demonstrated the utility of AI-aided elastography for detecting liver fibrosis [161,162]. Gatos et al. designed an AI-aided shear-wave elastography based on a support vector machine to discriminate patients with chronic liver disease (fibrosis) from healthy individuals, with a sensitivity of 93.5%, a specificity of 81.2%, an accuracy of 87.3%, and an AUC of 0.87 [162]. Wang et al. applied deep learning to shear-wave elastography and compared the AI elastography to liver biopsy [161]; the AI elastography accurately diagnosed cirrhosis (AUC 0.97) and advanced fibrosis (AUC 0.98).
Other AI approaches using clinical and laboratory variables routinely measured in clinical practice have also been developed [163]. Applied to serial laboratory data over a person’s timeline, AI analysis can provide a better understanding of the mechanisms and relationships of risk factors and symptoms; such AI-based risk assessment of NAFLD should improve both physicians’ management and patients’ motivation. Many algorithms have been tried in medical AI [164], and choosing the best one is an important issue in AI data analysis. Ma et al. used a Bayesian network model and showed better performance in diagnosing NAFLD from clinical data than logistic regression [165], while Sowa et al. suggested that random forest and decision tree models are better than a support vector machine for separating NAFLD from alcoholic liver disease [166].
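Algorithm selection of this kind is typically done empirically, for example by cross-validating several candidate models on the same data. The generic sketch below uses synthetic data, not the published cohorts, and the three candidates are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))  # stand-ins for routine laboratory variables
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=300) > 1).astype(int)

# Compare candidate algorithms on identical folds and pick by mean AUC.
for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=0)),
                  ("support vector machine", SVC())]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC {auc:.2f}")
```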

10.3. Viral Hepatitis

Viral hepatitis (B and C) is still a major cause of liver cirrhosis and carcinogenesis worldwide, particularly in developing countries. Several AI-based models have been developed to predict the risk of hepatitis-related cirrhosis [167,168,169,170,171]. More recently, a unique prediction model using gut microbiome data was published: Oh et al. used a random forest-based AI algorithm with differential abundance analysis to profile the gut microbiota and metabolites, detecting cirrhosis with an AUC of 0.91 [172].

10.4. Primary Sclerosing Cholangitis (PSC)

PSC lacks effective medical treatments and occasionally requires a liver transplant due to advanced fibrosis [173]. Moreover, PSC is a premalignant condition and is associated with bile duct cancer at an incidence of 10–30% [173]. Eaton et al. developed an AI-based prediction model, called the Primary Sclerosing Cholangitis Risk Estimate Tool (PREsTo), and demonstrated that the model accurately predicts liver failure in PSC patients, which exceeded the performance of other established, noninvasive prognostic scoring systems [174].

10.5. Liver Transplantation

Liver transplantation offers an excellent outcome for several end-stage liver disorders. However, challenges remain, such as insufficient donors, high mortality on the waiting list, and graft failure. Given the discrepancy between the numbers of donors and recipients, organ allocation should be performed appropriately and without human bias. Current allocations are based on widely used scoring systems, such as the model for end-stage liver disease (MELD) score, the Delta-MELD score, and the balance-of-risk score, which may yield conflicting results [175,176]. Some AI-based donor–recipient matching models have been developed [177,178]. Graft failure is the most common problem after liver transplantation; AI-based algorithms developed by Lau et al. using donor, transplant, and recipient characteristics predicted graft failure with a high AUC of 0.818 [179]. AI has also been applied to identify novel factors associated with death after transplantation [180,181]; using a machine learning approach, Bhat et al. found that new-onset or preexisting diabetes was associated with high mortality [180].

11. Pancreatic Disease

This section reviews AI-aided image and data analyses for the diagnosis of pancreatic disease.

11.1. Pancreatic Cancer

Pancreatic cancer is the seventh most lethal cancer worldwide [43,182]. Tumor size is the most important prognostic factor in pancreatic cancer [183]: the five-year survival of patients with lesions smaller than 10 mm (TS1a) is more than 80%, while that of patients with larger lesions (>10 mm) is less than 50% [184]. The challenges for pancreatic ductal cancer include the lack of a well-defined high-risk group and the difficulty of early detection by imaging. Pereira et al. have nicely summarized the literature on early detection with AI technology [185]. Although abdominal CT is commonly used to screen for pancreatic cancer, its detection sensitivity is not high for small lesions [186,187]. To address this issue, Liu et al. first trained an AI algorithm with 436 CT images, including 300 normal cases and 136 pancreatic cancer cases [188]. The AI system achieved a sensitivity of 80.2% and a specificity of 90.2%, which may improve with a larger number of training images. Alternatively, EUS is a more powerful modality for detecting small lesions in the pancreas [187,189]. Tonozuka et al. published a pilot study using video to detect pancreatic ductal cancer by AI-based EUS [190]. The system was trained with 920 images of cancers as well as control images from patients with chronic pancreatitis or a normal pancreas, and it was subsequently validated with an additional 470 test images, diagnosing cancers with an AUC of 0.94. To differentiate between cancer and non-cancer (chronic pancreatitis and a normal pancreas), several algorithms have been applied since the first report using a simple conventional algorithm by Norton et al., with three recent reports using deep learning [191].
The identification of high-risk individuals is another important factor for the early detection of pancreatic cancer [185]. Using AI methodologies and the National Health Insurance Research Database of Taiwan (1,358,634 patients in total), Hsieh et al. developed the first prediction models for pancreatic cancer in patients with type 2 diabetes [192]. They found that a logistic regression algorithm predicted pancreatic cancer more accurately (AUC of 0.727) than an artificial neural network, although several researchers have reported that artificial neural networks are well suited to predicting certain diseases [193]. Further investigation is necessary to identify the most suitable model.

11.2. Intraductal Papillary Mucinous Neoplasm (IPMN)

Pancreatic cystic lesions, particularly IPMNs, are precursors of pancreatic cancer [194]. Kuwahara et al. established an AI-aided EUS using deep learning to distinguish malignant from benign IPMNs [195]. The AI-aided EUS diagnosed malignancy with a high sensitivity of 95.7% and a high accuracy of 94.0%, far exceeding the accuracy of experts’ diagnoses (56.0%). AI-aided diagnosis is under development not only for IPMNs but also for other cystic lesions of the pancreas, such as serous cystic neoplasms, mucinous cystic neoplasms, solid pseudopapillary neoplasms, and cystic pancreatic neuroendocrine neoplasms [196].

11.3. Autoimmune Pancreatitis (AIP)

Mass-forming AIP may be misdiagnosed as pancreatic cancer, leading to unnecessary surgical resection. Marya et al. demonstrated that an AI-aided EUS accurately differentiated AIP from pancreatic ductal adenocarcinoma and benign pancreatic conditions, permitting an earlier and more accurate diagnosis [197]. This model offers the potential for more timely and appropriate patient care and improved outcomes.

12. Future Needs and Conclusions

AI technologies in the medical field hold tremendous promise, although systematic reviews have not yet provided sufficient evidence that AI outperforms physicians [198]. Several AI-aided devices are commercially available (Table 1), and multiple studies are ongoing in promising areas, such as the identification of anatomical structures and lesions during endoscopic ultrasound, robotic endoscopic surgery, and mobile applications. However, there are potential pitfalls, including technical and legal issues [199]. To improve the accuracy of AI diagnosis, more data, including imaging and clinical data, are required to train AI systems. The training data should be collected not only from patients with disease but also from healthy individuals, because larger databases will increase the specificity of AI systems. Particularly for rare diseases, international multicenter projects and open-source libraries, such as ImageNet and cloud-based networks [135,136,200], are ideal for providing sufficient training data. Another issue is “data formatting”: different institutions and software packages may use different data formats, and standardization is critical for future AI development. To resolve these issues, clinicians need to better understand AI technologies by reading AI-related articles and collaborating with AI engineers. Even with a large amount of training data, the performance of a particular AI system changes with each training step (annotation, selection of algorithm, selection of data set, etc.), and the addition of inappropriate data will adversely affect performance. Moreover, even when sufficient high-quality training data are used, “overfitting” may occur, so AI systems must be validated in real-world situations to be designed precisely [104,201,202].
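The overfitting problem can be demonstrated in a few lines: a flexible model fit to pure noise scores perfectly on its own training data yet performs at chance on held-out cases, which is why external, real-world validation matters. The demonstration below uses synthetic data and an unconstrained decision tree as a deliberately extreme example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)  # pure noise: there is nothing real to learn

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier().fit(X_tr, y_tr)  # unconstrained depth memorizes
print(f"training accuracy: {tree.score(X_tr, y_tr):.2f}")  # ~1.00 (memorized)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")  # ~0.50 (chance)
```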
In conclusion, there is little doubt that AI technology will benefit almost all medical personnel, ranging from specialty physicians to paramedics, in the future [7]. Furthermore, patients should benefit from AI technology directly via mobile applications [135,136]. Physicians should collaborate with the different stakeholders within the AI ecosystem to provide ethical, practical, user-friendly, and cost-effective solutions that reduce the gap between research settings and applications in clinical practice. Collaborations with regulators, patient advocates, AI companies, technology giants, and venture capitalists will help move the field forward.

Author Contributions

Writing—original draft preparation, A.O.; writing—review and editing, N.I. and S.I.; funding acquisition, A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by MEXT KAKENHI (21K15951) to A.O.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We would like to thank Keiko Masuzaki for processing grant management.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Benaich, N.; Hogarth, I. The State of AI Report. 2020. Available online: https://www.stateof.ai/ (accessed on 19 September 2021).
  2. Wang, A.; Nguyen, D.; Sridhar, A.R.; Gollakota, S. Using smart speakers to contactlessly monitor heart rhythms. Commun. Biol. 2021, 4, 319.
  3. Ardila, D.; Kiraly, A.P.; Bharadwaj, S.; Choi, B.; Reicher, J.J.; Peng, L.; Tse, D.; Etemadi, M.; Ye, W.; Corrado, G.; et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019, 25, 954–961.
  4. Haenssle, H.A.; Fink, C.; Schneiderbauer, R.; Toberer, F.; Buhl, T.; Blum, A.; Kalloo, A.; Hassen, A.B.H.; Thomas, L.; Enk, A.; et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018, 29, 1836–1842.
  5. IEEE. AI vs. Doctors. Available online: https://0-spectrum-ieee-org.brum.beds.ac.uk/static/ai-vs-doctors (accessed on 20 April 2021).
  6. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  7. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
  8. Stokes, J.M.; Yang, K.; Swanson, K.; Jin, W.; Cubillos-Ruiz, A.; Donghia, N.M.; MacNair, C.R.; French, S.; Carfrae, L.A.; Bloom-Ackermann, Z.; et al. A deep learning approach to antibiotic discovery. Cell 2020, 180, 688–702.e13.
  9. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567.
  10. Hamilton, J.G.; Genoff Garzon, M.; Westerman, J.S.; Shuk, E.; Hay, J.L.; Walters, C.; Elkin, E.; Bertelsen, C.; Cho, J.; Daly, B.; et al. “A tool, not a crutch”: Patient perspectives about IBM Watson for oncology trained by Memorial Sloan Kettering. J. Oncol. Pract. 2019, 15, e277–e288.
  11. IBM. IBM Watson Products. Available online: https://www.ibm.com/watson/products-services (accessed on 19 September 2021).
  12. Parasa, S.; Wallace, M.; Bagci, U.; Antonino, M.; Berzin, T.; Byrne, M.; Celik, H.; Farahani, K.; Golding, M.; Gross, S.; et al. Proceedings from the First Global Artificial Intelligence in Gastroenterology and Endoscopy Summit. Gastrointest. Endosc. 2020, 92, 938–945.e1.
  13. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460.
  14. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the Dartmouth summer research project on artificial intelligence. AI Mag. 2006, 27, 12–14.
  15. Samuel, A.L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 1959, 44, 210–229.
  16. Camacho, D.M.; Collins, K.M.; Powers, R.K.; Costello, J.C.; Collins, J.J. Next-generation machine learning for biological networks. Cell 2018, 173, 1581–1592.
  17. Hinton, G.E. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  19. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
  20. Min, J.K.; Kwak, M.S.; Cha, J.M. Overview of deep learning in gastrointestinal endoscopy. Gut Liver 2019, 13, 388–393.
  21. Hassan, T.M.; Elmogy, M.; Sallam, E.-S. Diagnosis of focal liver diseases based on deep learning technique for ultrasound images. Arab. J. Sci. Eng. 2017, 42, 3127–3140.
  22. Forte, C.; Voinea, A.; Chichirau, M.; Yeshmagambetova, G.; Albrecht, L.M.; Erfurt, C.; Freundt, L.A.; Carmo, L.O.E.; Henning, R.H.; van der Horst, I.C.C.; et al. Deep learning for identification of acute illness and facial cues of illness. Front. Med. 2021, 8, 661309.
  23. Jia, X.; Meng, M.Q.H. A deep convolutional neural network for bleeding detection in Wireless Capsule Endoscopy images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 639–642.
  24. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
  25. FDA. FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-Related Eye Problems. Available online: https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye (accessed on 19 September 2021).
  26. Liang, H.; Xiang, Y.-Q.; Lv, X.; Xie, C.-Q.; Cao, S.-M.; Wang, L.; Qian, C.-N.; Yang, J.; Ye, Y.-F.; Gan, F.; et al. Survival impact of waiting time for radical radiotherapy in nasopharyngeal carcinoma: A large institution-based cohort study from an endemic area. Eur. J. Cancer 2017, 73, 48–60.
  27. Muto, M.; Minashi, K.; Yano, T.; Saito, Y.; Oda, I.; Nonaka, S.; Omori, T.; Sugiura, H.; Goda, K.; Kaise, M.; et al. Early detection of superficial squamous cell carcinoma in the head and neck region and esophagus by narrow band imaging: A multicenter randomized controlled trial. J. Clin. Oncol. 2010, 28, 1566–1572.
  28. Shimizu, Y.; Tsukagoshi, H.; Fujita, M.; Hosokawa, M.; Watanabe, A.; Kawabori, S.; Kato, M.; Sugiyama, T.; Asaka, M. Head and neck cancer arising after endoscopic mucosal resection for squamous cell carcinoma of the esophagus. Endoscopy 2003, 35, 322–326.
  29. Muto, M.; Satake, H.; Yano, T.; Minashi, K.; Hayashi, R.; Fujii, S.; Ochiai, A.; Ohtsu, A.; Morita, S.; Horimatsu, T.; et al. Long-term outcome of transoral organ-preserving pharyngeal endoscopic resection for superficial pharyngeal cancer. Gastrointest. Endosc. 2011, 74, 477–484.
  30. Mascharak, S.; Baird, B.J.; Holsinger, F.C. Detecting oropharyngeal carcinoma using multispectral, narrow-band imaging and machine learning. Laryngoscope 2018, 128, 2514–2520.
  31. Li, C.; Jing, B.; Ke, L.; Li, B.; Xia, W.; He, C.; Qian, C.; Zhao, C.; Mai, H.; Chen, M.; et al. Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies. Cancer Commun. 2018, 38, 59.
  32. Tamashiro, A.; Yoshio, T.; Ishiyama, A.; Tsuchida, T.; Hijikata, K.; Yoshimizu, S.; Horiuchi, Y.; Hirasawa, T.; Seto, A.; Sasaki, T.; et al. Artificial intelligence-based detection of pharyngeal cancer using convolutional neural networks. Dig. Endosc. 2020, 32, 1057–1065.
  33. Kono, M.; Ishihara, R.; Kato, Y.; Miyake, M.; Shoji, A.; Inoue, T.; Matsueda, K.; Waki, K.; Fukuda, H.; Shimamoto, Y.; et al. Diagnosis of pharyngeal cancer on endoscopic video images by Mask region-based convolutional neural network. Dig. Endosc. 2020, den.13800.
  34. Abe, S.; Oda, I. Real-time pharyngeal cancer detection utilizing artificial intelligence: Journey from the proof of concept to the clinical use. Dig. Endosc. 2021, 33, 552–553.
  35. Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J.; et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660.
  36. Tsuboi, A.; Oka, S.; Aoyama, K.; Saito, H.; Aoki, T.; Yamada, A.; Matsuda, T.; Fujishiro, M.; Ishihara, S.; Nakahori, M.; et al. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Dig. Endosc. 2020, 32, 382–390.
  37. Schmauch, B.; Herent, P.; Jehanno, P.; Dehaene, O.; Saillard, C.; Aubé, C.; Luciani, A.; Lassau, N.; Jégou, S. Diagnosis of focal liver lesions from ultrasound using deep learning. Diagn. Interv. Imaging 2019, 100, 227–233.
  38. Rodríguez-Camacho, E.; Pita-Fernández, S.; Pértega-Díaz, S.; López-Calviño, B.; Seoane-Pillado, T. Clinical-pathological characteristics and prognosis of a cohort of oesophageal cancer patients: A competing risks survival analysis. J. Epidemiol. 2015, 25, 231–238.
  39. Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99.
  40. Sano, T.; Coit, D.G.; Kim, H.H.; Roviello, F.; Kassab, P.; Wittekind, C.; Yamamoto, Y.; Ohashi, Y. Proposal of a new stage grouping of gastric cancer for TNM classification: International Gastric Cancer Association staging project. Gastric Cancer 2017, 20, 217–225.
  41. Rice, T.W.; Ishwaran, H.; Hofstetter, W.L.; Kelsen, D.P.; Apperson-Hansen, C.; Blackstone, E.H. Recommendations for pathologic staging (pTNM) of cancer of the esophagus and esophagogastric junction for the 8th edition AJCC/UICC staging manuals. Dis. Esophagus 2016, 29, 897–905.
  42. Hosokawa, O.; Tsuda, S.; Kidani, E.; Watanabe, K.; Tanigawa, Y.; Shirasaki, S.; Hayashi, H.; Hinoshita, T. Diagnosis of gastric cancer up to three years after negative upper gastrointestinal endoscopy. Endoscopy 1998, 30, 669–674.
  43. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA. Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  44. Tokai, Y.; Yoshio, T.; Aoyama, K.; Horie, Y.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Tsuchida, T.; Hirasawa, T.; Sakakibara, Y.; et al. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus 2020, 17, 250–256. [Google Scholar] [CrossRef]
  45. Horie, Y.; Yoshio, T.; Aoyama, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Hirasawa, T.; Tsuchida, T.; Ozawa, T.; Ishihara, S.; et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest. Endosc. 2019, 89, 25–32. [Google Scholar] [CrossRef] [PubMed]
  46. Ohmori, M.; Ishihara, R.; Aoyama, K.; Nakagawa, K.; Iwagami, H.; Matsuura, N.; Shichijo, S.; Yamamoto, K.; Nagaike, K.; Nakahara, M.; et al. Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest. Endosc. 2020, 91, 301–309.e1. [Google Scholar] [CrossRef] [PubMed]
  47. Nakagawa, K.; Ishihara, R.; Aoyama, K.; Ohmori, M.; Nakahira, H.; Matsuura, N.; Shichijo, S.; Nishida, T.; Yamada, T.; Yamaguchi, S.; et al. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists. Gastrointest. Endosc. 2019, 90, 407–414. [Google Scholar] [CrossRef] [PubMed]
  48. Luo, H.; Xu, G.; Li, C.; He, L.; Luo, L.; Wang, Z.; Jing, B.; Deng, Y.; Jin, Y.; Li, Y.; et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: A multicentre, case-control, diagnostic study. Lancet Oncol. 2019, 20, 1645–1654. [Google Scholar] [CrossRef]
  49. Ferlay, J.; Colombet, M.; Soerjomataram, I.; Mathers, C.; Parkin, D.M.; Piñeros, M.; Znaor, A.; Bray, F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int. J. Cancer 2019, 144, 1941–1953. [Google Scholar] [CrossRef] [Green Version]
  50. de Groof, J.; van der Sommen, F.; van der Putten, J.; Struyvenberg, M.R.; Zinger, S.; Curvers, W.L.; Pech, O.; Meining, A.; Neuhaus, H.; Bisschops, R.; et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United Eur. Gastroenterol. J. 2019, 7, 538–547. [Google Scholar] [CrossRef] [Green Version]
  51. De Groof, A.J.; Struyvenberg, M.R.; van der Putten, J.; van der Sommen, F.; Fockens, K.N.; Curvers, W.L.; Zinger, S.; Pouw, R.E.; Coron, E.; Baldaque-Silva, F.; et al. Deep-learning system detects neoplasia in patients with barrett’s esophagus with higher accuracy than endoscopists in a multistep training and validation study with benchmarking. Gastroenterology 2020, 158, 915–929.e4. [Google Scholar] [CrossRef]
  52. Hashimoto, R.; Requa, J.; Dao, T.; Ninh, A.; Tran, E.; Mai, D.; Lugo, M.; El-Hage Chehade, N.; Chang, K.J.; Karnes, W.E.; et al. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett’s esophagus (with video). Gastrointest. Endosc. 2020, 91, 1264–1271.e1. [Google Scholar] [CrossRef]
  53. Arribas, J.; Antonelli, G.; Frazzoni, L.; Fuccio, L.; Ebigbo, A.; van der Sommen, F.; Ghatwary, N.; Palm, C.; Coimbra, M.; Renna, F.; et al. Standalone performance of artificial intelligence for upper GI neoplasia: A meta-analysis. Gut 2021, 70, 1458–1468. [Google Scholar] [CrossRef]
  54. van der Sommen, F.; Zinger, S.; Curvers, W.; Bisschops, R.; Pech, O.; Weusten, B.; Bergman, J.; de With, P.; Schoon, E. Computer-aided detection of early neoplastic lesions in Barrett’s esophagus. Endoscopy 2016, 48, 617–624. [Google Scholar] [CrossRef] [Green Version]
  55. Swager, A.-F.; van der Sommen, F.; Klomp, S.R.; Zinger, S.; Meijer, S.L.; Schoon, E.J.; Bergman, J.J.G.H.M.; de With, P.H.; Curvers, W.L. Computer-aided detection of early Barrett’s neoplasia using volumetric laser endomicroscopy. Gastrointest. Endosc. 2017, 86, 839–846. [Google Scholar] [CrossRef] [Green Version]
  56. Fonollà, R.; Scheeve, T.; Struyvenberg, M.R.; Curvers, W.L.; de Groof, A.J.; van der Sommen, F.; Schoon, E.J.; Bergman, J.J.G.H.M.; de With, P.H.N. Ensemble of deep convolutional neural networks for classification of early Barrett’s neoplasia using volumetric laser endomicroscopy. Appl. Sci. 2019, 9, 2183. [Google Scholar] [CrossRef] [Green Version]
  57. Ebigbo, A.; Mendel, R.; Probst, A.; Manzeneder, J.; de Souza, L.A., Jr.; Papa, J.P.; Palm, C.; Messmann, H. Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma. Gut 2019, 68, 1143–1145. [Google Scholar] [CrossRef] [Green Version]
  58. Ebigbo, A.; Mendel, R.; Probst, A.; Manzeneder, J.; Prinz, F.; de Souza, L.A., Jr.; Papa, J.; Palm, C.; Messmann, H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus. Gut 2020, 69, 615–616. [Google Scholar] [CrossRef] [PubMed]
  59. Allum, W.H.; Blazeby, J.M.; Griffin, S.M.; Cunningham, D.; Jankowski, J.A.; Wong, R.; Association of Upper Gastrointestinal Surgeons of Great Britain and Ireland. Guidelines for the management of oesophageal and gastric cancer. Gut 2011, 60, 1449–1472. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Wu, L.; Zhou, W.; Wan, X.; Zhang, J.; Shen, L.; Hu, S.; Ding, Q.; Mu, G.; Yin, A.; Huang, X.; et al. A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy 2019, 51, 522–531. [Google Scholar] [CrossRef] [Green Version]
  61. Horiuchi, Y.; Hirasawa, T.; Ishizuka, N.; Tokai, Y.; Namikawa, K.; Yoshimizu, S.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Fujisaki, J.; et al. Performance of a computer-aided diagnosis system in diagnosing early gastric cancer using magnifying endoscopy videos with narrow-band imaging (with videos). Gastrointest. Endosc. 2020, 92, 856–865.e1. [Google Scholar] [CrossRef] [PubMed]
  62. Ikenoyama, Y.; Hirasawa, T.; Ishioka, M.; Namikawa, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Takeuchi, Y.; et al. Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists. Dig. Endosc. 2021, 33, 141–150. [Google Scholar] [CrossRef] [PubMed]
  63. Horiuchi, Y.; Tokai, Y.; Yamamoto, N.; Yoshimizu, S.; Ishiyama, A.; Yoshio, T.; Hirasawa, T.; Yamamoto, Y.; Nagahama, M.; Takahashi, H.; et al. Additive effect of magnifying endoscopy with narrow-band imaging for diagnosing mixed-type early gastric cancers. Dig. Dis. Sci. 2020, 65, 591–599. [Google Scholar] [CrossRef]
  64. Horiuchi, Y.; Aoyama, K.; Tokai, Y.; Hirasawa, T.; Yoshimizu, S.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Fujisaki, J.; Tada, T. Convolutional neural network for differentiating gastric cancer from gastritis using magnified endoscopy with narrow band imaging. Dig. Dis. Sci. 2020, 65, 1355–1363. [Google Scholar] [CrossRef]
  65. Miyaki, R.; Yoshida, S.; Tanaka, S.; Kominami, Y.; Sanomura, Y.; Matsuo, T.; Oka, S.; Raytchev, B.; Tamaki, T.; Koide, T.; et al. Quantitative identification of mucosal gastric cancer under magnifying endoscopy with flexible spectral imaging color enhancement. J. Gastroenterol. Hepatol. 2013, 28, 841–847. [Google Scholar] [CrossRef]
  66. Kanesaka, T.; Lee, T.-C.; Uedo, N.; Lin, K.-P.; Chen, H.-Z.; Lee, J.-Y.; Wang, H.-P.; Chang, H.-T. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest. Endosc. 2018, 87, 1339–1344. [Google Scholar] [CrossRef]
  67. Kubota, K.; Kuroda, J.; Yoshida, M.; Ohta, K.; Kitajima, M. Medical image analysis: Computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg. Endosc. 2012, 26, 1485–1489. [Google Scholar] [CrossRef]
  68. Zhu, Y.; Wang, Q.-C.; Xu, M.-D.; Zhang, Z.; Cheng, J.; Zhong, Y.-S.; Zhang, Y.-Q.; Chen, W.-F.; Yao, L.-Q.; Zhou, P.-H.; et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest. Endosc. 2019, 89, 806–815.e1. [Google Scholar] [CrossRef]
  69. Yoon, H.J.; Kim, S.; Kim, J.-H.; Keum, J.-S.; Oh, S.-I.; Jo, J.; Chun, J.; Youn, Y.H.; Park, H.; Kwon, I.G.; et al. A lesion-based convolutional neural network improves endoscopic detection and depth prediction of early gastric cancer. J. Clin. Med. 2019, 8, 1310. [Google Scholar] [CrossRef] [Green Version]
  70. Uemura, N.; Okamoto, S.; Yamamoto, S.; Matsumura, N.; Yamaguchi, S.; Yamakido, M.; Taniyama, K.; Sasaki, N.; Schlemper, R.J. Helicobacter pylori infection and the development of gastric cancer. N. Engl. J. Med. 2001, 345, 784–789. [Google Scholar] [CrossRef]
  71. Watanabe, K.; Nagata, N.; Shimbo, T.; Nakashima, R.; Furuhata, E.; Sakurai, T.; Akazawa, N.; Yokoi, C.; Kobayakawa, M.; Akiyama, J.; et al. Accuracy of endoscopic diagnosis of Helicobacter pyloriinfection according to level of endoscopic experience and the effect of training. BMC Gastroenterol. 2013, 13, 128. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Shichijo, S.; Nomura, S.; Aoyama, K.; Nishikawa, Y.; Miura, M.; Shinagawa, T.; Takiyama, H.; Tanimoto, T.; Ishihara, S.; Matsuo, K.; et al. Application of convolutional neural networks in the diagnosis of Helicobacter pylori infection based on endoscopic images. EBioMedicine 2017, 25, 106–111. [Google Scholar] [CrossRef] [Green Version]
  73. Shichijo, S.; Endo, Y.; Aoyama, K.; Takeuchi, Y.; Ozawa, T.; Takiyama, H.; Matsuo, K.; Fujishiro, M.; Ishihara, S.; Ishihara, R.; et al. Application of convolutional neural networks for evaluating Helicobacter pylori infection status on the basis of endoscopic images. Scand. J. Gastroenterol. 2019, 54, 158–163. [Google Scholar] [CrossRef] [PubMed]
  74. Nakashima, H.; Kawahira, H.; Kawachi, H.; Sakaki, N. Endoscopic three-categorical diagnosis of Helicobacter pylori infection using linked color imaging and deep learning: A single-center prospective study (with video). Gastric Cancer 2020, 23, 1033–1040. [Google Scholar] [CrossRef]
  75. Nakahira, H.; Ishihara, R.; Aoyama, K.; Kono, M.; Fukuda, H.; Shimamoto, Y.; Nakagawa, K.; Ohmori, M.; Iwatsubo, T.; Iwagami, H.; et al. Stratification of gastric cancer risk using a deep neural network. JGH Open 2020, 4, 466–471. [Google Scholar] [CrossRef]
  76. Shung, D.; Simonov, M.; Gentry, M.; Au, B.; Laine, L. Machine learning to predict outcomes in patients with acute gastrointestinal bleeding: A systematic review. Dig. Dis. Sci. 2019, 64, 2078–2087. [Google Scholar] [CrossRef]
  77. Shung, D.L.; Au, B.; Taylor, R.A.; Tay, J.K.; Laursen, S.B.; Stanley, A.J.; Dalton, H.R.; Ngu, J.; Schultz, M.; Laine, L. Validation of a machine learning model that outperforms clinical risk scoring systems for upper gastrointestinal bleeding. Gastroenterology 2020, 158, 160–167. [Google Scholar] [CrossRef]
  78. Yao, K.; Uedo, N.; Kamada, T.; Hirasawa, T.; Nagahama, T.; Yoshinaga, S.; Oka, M.; Inoue, K.; Mabe, K.; Yao, T.; et al. Guidelines for endoscopic diagnosis of early gastric cancer. Dig. Endosc. 2020, 32, 663–698. [Google Scholar] [CrossRef] [PubMed]
  79. Wu, L.; Zhang, J.; Zhou, W.; An, P.; Shen, L.; Liu, J.; Jiang, X.; Huang, X.; Mu, G.; Wan, X.; et al. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut 2019, 68, 2161–2169. [Google Scholar] [CrossRef] [Green Version]
  80. Chen, D.; Wu, L.; Li, Y.; Zhang, J.; Liu, J.; Huang, L.; Jiang, X.; Huang, X.; Mu, G.; Hu, S.; et al. Comparing blind spots of unsedated ultrafine, sedated, and unsedated conventional gastroscopy with and without artificial intelligence: A prospective, single-blind, 3-parallel-group, randomized, single-center trial. Gastrointest. Endosc. 2020, 91, 332–339.e3. [Google Scholar] [CrossRef] [PubMed]
  81. Minoda, Y.; Ihara, E.; Komori, K.; Ogino, H.; Otsuka, Y.; Chinen, T.; Tsuda, Y.; Ando, K.; Yamamoto, H.; Ogawa, Y. Efficacy of endoscopic ultrasound with artificial intelligence for the diagnosis of gastrointestinal stromal tumors. J. Gastroenterol. 2020, 55, 1119–1126. [Google Scholar] [CrossRef]
  82. Inoue, S.; Shichijo, S.; Aoyama, K.; Kono, M.; Fukuda, H.; Shimamoto, Y.; Nakagawa, K.; Ohmori, M.; Iwagami, H.; Matsuno, K.; et al. Application of convolutional neural networks for detection of superficial nonampullary duodenal epithelial tumors in esophagogastroduodenoscopic images. Clin. Transl. Gastroenterol. 2020, 11, e00154. [Google Scholar] [CrossRef] [PubMed]
  83. Triester, S.L.; Leighton, J.A.; Leontiadis, G.I.; Fleischer, D.E.; Hara, A.K.; Heigh, R.I.; Shiff, A.D.; Sharma, V.K. A meta-analysis of the yield of capsule endoscopy compared to other diagnostic modalities in patients with obscure gastrointestinal bleeding. Am. J. Gastroenterol. 2005, 100, 2407–2418. [Google Scholar] [CrossRef] [PubMed]
  84. McAlindon, M.E.; Ching, H.-L.; Yung, D.; Sidhu, R.; Koulaouzidis, A. Capsule endoscopy of the small bowel. Ann. Transl. Med. 2016, 4, 369. [Google Scholar] [CrossRef] [Green Version]
  85. Lewis, B.S. Expanding role of capsule endoscopy in inflammatory bowel disease. World J. Gastroenterol. 2008, 14, 4137. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Kumar, R.; Qian, Z.; Seshamani, S.; Mullin, G.; Hager, G.; Dassopoulos, T. Assessment of Crohn’s disease lesions in wireless capsule endoscopy images. IEEE Trans. Biomed. Eng. 2012, 59, 355–362. [Google Scholar] [CrossRef] [PubMed]
  87. Trasolini, R.; Byrne, M.F. Artificial intelligence and deep learning for small bowel capsule endoscopy. Dig. Endosc. 2021, 33, 290–297. [Google Scholar] [CrossRef]
  88. Boal Carvalho, P.; Magalhães, J.; Dias de Castro, F.; Monteiro, S.; Rosa, B.; Moreira, M.J.; Cotter, J. Suspected blood indicator in capsule endoscopy: A valuable tool for gastrointestinal bleeding diagnosis. Arq. Gastroenterol. 2017, 54, 16–20. [Google Scholar] [CrossRef] [Green Version]
  89. Hassan, A.R.; Haque, M.A. Computer-aided gastrointestinal hemorrhage detection in wireless capsule endoscopy videos. Comput. Methods Programs Biomed. 2015, 122, 341–353. [Google Scholar] [CrossRef]
  90. Aoki, T.; Yamada, A.; Kato, Y.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J. Gastroenterol. Hepatol. 2020, 35, 1196–1200. [Google Scholar] [CrossRef]
  91. Aoki, T.; Yamada, A.; Kato, Y.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: A multicenter study. Gastrointest. Endosc. 2021, 93, 165–173.e1. [Google Scholar] [CrossRef]
  92. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363.e2. [Google Scholar] [CrossRef]
  93. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Fujisawa, G.; Odawara, N.; Kondo, R.; Tsuboi, A.; Ishibashi, R.; Nakada, A.; et al. Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading. Dig. Endosc. 2020, 32, 585–591. [Google Scholar] [CrossRef] [PubMed]
  94. Zauber, A.G.; Winawer, S.J.; O’Brien, M.J.; Lansdorp-Vogelaar, I.; van Ballegooijen, M.; Hankey, B.F.; Shi, W.; Bond, J.H.; Schapiro, M.; Panish, J.F.; et al. Colonoscopic polypectomy and long-term prevention of colorectal-cancer deaths. N. Engl. J. Med. 2012, 366, 687–696. [Google Scholar] [CrossRef]
  95. Kumar, S.; Thosani, N.; Ladabaum, U.; Friedland, S.; Chen, A.M.; Kochar, R.; Banerjee, S. Adenoma miss rates associated with a 3-minute versus 6-minute colonoscopy withdrawal time: A prospective, randomized trial. Gastrointest. Endosc. 2017, 85, 1273–1280. [Google Scholar] [CrossRef]
  96. Van Rijn, J.C.; Reitsma, J.B.; Stoker, J.; Bossuyt, P.M.; van Deventer, S.J.; Dekker, E. Polyp miss rate determined by tandem colonoscopy: A systematic review. Am. J. Gastroenterol. 2006, 101, 343–350. [Google Scholar] [CrossRef]
  97. Misawa, M.; Kudo, S.; Mori, Y.; Cho, T.; Kataoka, S.; Yamauchi, A.; Ogawa, Y.; Maeda, Y.; Takeda, K.; Ichimasa, K.; et al. Artificial intelligence-assisted polyp detection for colonoscopy: Initial experience. Gastroenterology 2018, 154, 2027–2029.e3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  98. Urban, G.; Tripathi, P.; Alkayali, T.; Mittal, M.; Jalali, F.; Karnes, W.; Baldi, P. Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy. Gastroenterology 2018, 155, 1069–1078.e8. [Google Scholar] [CrossRef] [PubMed]
  99. Misawa, M.; Kudo, S.; Mori, Y.; Hotta, K.; Ohtsuka, K.; Matsuda, T.; Saito, S.; Kudo, T.; Baba, T.; Ishida, F.; et al. Development of a computer-aided detection system for colonoscopy and a publicly accessible large colonoscopy video database (with video). Gastrointest. Endosc. 2021, 93, 960–967.e3. [Google Scholar] [CrossRef]
  100. Wang, P.; Berzin, T.M.; Glissen Brown, J.R.; Bharadwaj, S.; Becq, A.; Xiao, X.; Liu, P.; Li, L.; Song, Y.; Zhang, D.; et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: A prospective randomised controlled study. Gut 2019, 68, 1813–1819. [Google Scholar] [CrossRef] [Green Version]
  101. Wang, P.; Liu, X.; Berzin, T.M.; Glissen Brown, J.R.; Liu, P.; Zhou, C.; Lei, L.; Li, L.; Guo, Z.; Lei, S.; et al. Effect of a deep-learning computer-aided detection system on adenoma detection during colonoscopy (CADe-DB trial): A double-blind randomised study. Lancet Gastroenterol. Hepatol. 2020, 5, 343–351. [Google Scholar] [CrossRef]
  102. Wang, P.; Liu, P.; Glissen Brown, J.R.; Berzin, T.M.; Zhou, G.; Lei, S.; Liu, X.; Li, L.; Xiao, X. Lower adenoma miss rate of computer-aided detection-assisted colonoscopy vs. routine white-light colonoscopy in a prospective tandem study. Gastroenterology 2020, 159, 1252–1261.e5. [Google Scholar] [CrossRef] [PubMed]
  103. Deding, U.; Herp, J.; Havshoei, A.; Kobaek-Larsen, M.; Buijs, M.; Nadimi, E.; Baatrup, G. Colon capsule endoscopy versus CT colonography after incomplete colonoscopy. Application of artificial intelligence algorithms to identify complete colonic investigations. United Eur. Gastroenterol. J. 2020, 8, 782–789. [Google Scholar] [CrossRef]
  104. Kominami, Y.; Yoshida, S.; Tanaka, S.; Sanomura, Y.; Hirakawa, T.; Raytchev, B.; Tamaki, T.; Koide, T.; Kaneda, K.; Chayama, K. Computer-aided diagnosis of colorectal polyp histology by using a real-time image recognition system and narrow-band imaging magnifying colonoscopy. Gastrointest. Endosc. 2016, 83, 643–649. [Google Scholar] [CrossRef]
  105. Tamai, N.; Saito, Y.; Sakamoto, T.; Nakajima, T.; Matsuda, T.; Sumiyama, K.; Tajiri, H.; Koyama, R.; Kido, S. Effectiveness of computer-aided diagnosis of colorectal lesions using novel software for magnifying narrow-band imaging: A pilot study. Endosc. Int. Open 2017, 05, E690–E694. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  106. Mori, Y.; Kudo, S.; Wakamura, K.; Misawa, M.; Ogawa, Y.; Kutsukawa, M.; Kudo, T.; Hayashi, T.; Miyachi, H.; Ishida, F.; et al. Novel computer-aided diagnostic system for colorectal lesions by using endocytoscopy (with videos). Gastrointest. Endosc. 2015, 81, 621–629. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  107. Mori, Y.; Kudo, S.; Misawa, M.; Mori, K. Simultaneous detection and characterization of diminutive polyps with the use of artificial intelligence during colonoscopy. VideoGIE 2019, 4, 7–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  108. Kudo, S.; Misawa, M.; Mori, Y.; Hotta, K.; Ohtsuka, K.; Ikematsu, H.; Saito, Y.; Takeda, K.; Nakamura, H.; Ichimasa, K.; et al. Artificial intelligence-assisted system improves endoscopic identification of colorectal neoplasms. Clin. Gastroenterol. Hepatol. 2020, 18, 1874–1881.e2. [Google Scholar] [CrossRef]
  109. Uraoka, T.; Tanaka, S.; Saito, Y.; Matsumoto, T.; Kuribayashi, S.; Hori, K.; Tajiri, H. Computer-assisted detection of diminutive and small colon polyps by colonoscopy using an extra-wide-area-view colonoscope. Endoscopy 2020, 53, E102–E103. [Google Scholar] [CrossRef] [PubMed]
  110. Takeda, K.; Kudo, S.-E.; Mori, Y.; Misawa, M.; Kudo, T.; Wakamura, K.; Katagiri, A.; Baba, T.; Hidaka, E.; Ishida, F.; et al. Accuracy of diagnosing invasive colorectal cancer using computer-aided endocytoscopy. Endoscopy 2017, 49, 798–802. [Google Scholar] [CrossRef] [PubMed]
  111. Chen, L.-D.; Li, W.; Xian, M.-F.; Zheng, X.; Lin, Y.; Liu, B.-X.; Lin, M.-X.; Li, X.; Zheng, Y.-L.; Xie, X.-Y.; et al. Preoperative prediction of tumour deposits in rectal cancer by an artificial neural network–based US radiomics model. Eur. Radiol. 2020, 30, 1969–1979. [Google Scholar] [CrossRef]
  112. Kudo, S.; Ichimasa, K.; Villard, B.; Mori, Y.; Misawa, M.; Saito, S.; Hotta, K.; Saito, Y.; Matsuda, T.; Yamada, K.; et al. Artificial intelligence system to determine risk of T1 colorectal cancer metastasis to lymph node. Gastroenterology 2021, 160, 1075–1084.e2. [Google Scholar] [CrossRef]
  113. Xavier, R.J.; Podolsky, D.K. Unravelling the pathogenesis of inflammatory bowel disease. Nature 2007, 448, 427–434. [Google Scholar] [CrossRef]
  114. Abraham, C.; Cho, J.H. Inflammatory bowel disease. N. Engl. J. Med. 2009, 361, 2066–2078. [Google Scholar] [CrossRef] [PubMed]
  115. Sartor, R.B.; Wu, G.D. Roles for intestinal bacteria, viruses, and fungi in pathogenesis of inflammatory bowel diseases and therapeutic approaches. Gastroenterology 2017, 152, 327–339.e4. [Google Scholar] [CrossRef] [Green Version]
  116. Ananthakrishnan, A.N.; Bernstein, C.N.; Iliopoulos, D.; Macpherson, A.; Neurath, M.F.; Ali, R.A.R.; Vavricka, S.R.; Fiocchi, C. Environmental triggers in IBD: A review of progress and evidence. Nat. Rev. Gastroenterol. Hepatol. 2018, 15, 39–49. [Google Scholar] [CrossRef]
  117. Oka, A.; Sartor, R.B. Microbial-based and microbial-targeted therapies for inflammatory bowel diseases. Dig. Dis. Sci. 2020, 65, 757–788. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  118. Franzosa, E.A.; Sirota-Madi, A.; Avila-Pacheco, J.; Fornelos, N.; Haiser, H.J.; Reinker, S.; Vatanen, T.; Hall, A.B.; Mallick, H.; McIver, L.J.; et al. Gut microbiome structure and metabolic activity in inflammatory bowel disease. Nat. Microbiol. 2019, 4, 293–305. [Google Scholar] [CrossRef] [PubMed]
  119. Graham, D.B.; Xavier, R.J. Pathway paradigms revealed from the genetics of inflammatory bowel disease. Nature 2020, 578, 527–539. [Google Scholar] [CrossRef] [PubMed]
  120. Seyed Tabib, N.S.; Madgwick, M.; Sudhakar, P.; Verstockt, B.; Korcsmaros, T.; Vermeire, S. Big data in IBD: Big progress for clinical practice. Gut 2020, 69, 1520–1532. [Google Scholar] [CrossRef] [PubMed]
  121. Waljee, A.K.; Wallace, B.I.; Cohen-Mekelburg, S.; Liu, Y.; Liu, B.; Sauder, K.; Stidham, R.W.; Zhu, J.; Higgins, P.D.R. Development and validation of machine learning models in prediction of remission in patients with moderate to severe crohn disease. JAMA Netw. Open 2019, 2, e193721. [Google Scholar] [CrossRef]
  122. Wang, L.; Fan, R.; Zhang, C.; Hong, L.; Zhang, T.; Chen, Y.; Liu, K.; Wang, Z.; Zhong, J. Applying machine learning models to predict medication nonadherence in Crohn’s disease maintenance therapy. Patient Prefer. Adherence 2020, 14, 917–926. [Google Scholar] [CrossRef]
  123. Bossuyt, P.; Vermeire, S.; Bisschops, R. Scoring endoscopic disease activity in IBD: Artificial intelligence sees more and better than we do. Gut 2020, 69, 788–789. [Google Scholar] [CrossRef]
  124. Bossuyt, P.; Nakase, H.; Vermeire, S.; de Hertogh, G.; Eelbode, T.; Ferrante, M.; Hasegawa, T.; Willekens, H.; Ikemoto, Y.; Makino, T.; et al. Automatic, computer-aided determination of endoscopic and histological inflammation in patients with mild to moderate ulcerative colitis based on red density. Gut 2020, 69, 1778–1786. [Google Scholar] [CrossRef]
  125. Stidham, R.W.; Liu, W.; Bishu, S.; Rice, M.D.; Higgins, P.D.R.; Zhu, J.; Nallamothu, B.K.; Waljee, A.K. Performance of a deep learning model vs human reviewers in grading endoscopic disease severity of patients with ulcerative colitis. JAMA Netw. Open 2019, 2, e193963. [Google Scholar] [CrossRef] [Green Version]
  126. Ozawa, T.; Ishihara, S.; Fujishiro, M.; Saito, H.; Kumagai, Y.; Shichijo, S.; Aoyama, K.; Tada, T. Novel computer-assisted diagnosis system for endoscopic disease activity in patients with ulcerative colitis. Gastrointest. Endosc. 2019, 89, 416–421.e1. [Google Scholar] [CrossRef] [PubMed]
  127. Takenaka, K.; Ohtsuka, K.; Fujii, T.; Negi, M.; Suzuki, K.; Shimizu, H.; Oshima, S.; Akiyama, S.; Motobayashi, M.; Nagahori, M.; et al. Development and validation of a deep neural network for accurate evaluation of endoscopic images from patients with ulcerative colitis. Gastroenterology 2020, 158, 2150–2157. [Google Scholar] [CrossRef] [PubMed]
  128. Maeda, Y.; Kudo, S.; Mori, Y.; Misawa, M.; Ogata, N.; Sasanuma, S.; Wakamura, K.; Oda, M.; Mori, K.; Ohtsuka, K. Fully automated diagnostic system with artificial intelligence using endocytoscopy to identify the presence of histologic inflammation associated with ulcerative colitis (with video). Gastrointest. Endosc. 2019, 89, 408–415. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  129. Charisis, V.S.; Hadjileontiadis, L.J. Potential of hybrid adaptive filtering in inflammatory lesion detection from capsule endoscopy images. World J. Gastroenterol. 2016, 22, 8641–8657. [Google Scholar] [CrossRef] [PubMed]
  130. Klang, E.; Barash, Y.; Margalit, R.Y.; Soffer, S.; Shimon, O.; Albshesh, A.; Ben-Horin, S.; Amitai, M.M.; Eliakim, R.; Kopylov, U. Deep learning algorithms for automated detection of Crohn’s disease ulcers by video capsule endoscopy. Gastrointest. Endosc. 2020, 91, 606–613.e2. [Google Scholar] [CrossRef]
  131. Tielbeek, J.A.W.; Vos, F.M.; Stoker, J. A computer-assisted model for detection of MRI signs of Crohn’s disease activity: Future or fiction? Abdom. Imaging 2012, 37, 967–973. [Google Scholar] [CrossRef] [Green Version]
  132. Mahapatra, D.; Schüffler, P.J.; Tielbeek, J.A.W.; Vos, F.M.; Buhmann, J.M. Semi-supervised and active learning for automatic segmentation of Crohn’s disease. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 214–221. [Google Scholar]
  133. Maeda, Y.; Kudo, S.; Ogata, N.; Misawa, M.; Mori, Y.; Mori, K.; Ohtsuka, K. Can artificial intelligence help to detect dysplasia in patients with ulcerative colitis? Endoscopy 2021, 53, E273–E274. [Google Scholar] [CrossRef]
  134. NICE. Clinical Practice Guideline. Irritable Bowel Syndrome in Adults: Diagnosis and Management of Irritable Bowel Syndrome in Primary Care; National Institute for Health and Care Excellence: London, UK, 2017; p. 554. [Google Scholar]
  135. Chung, C.-F.; Wang, Q.; Schroeder, J.; Cole, A.; Zia, J.; Fogarty, J.; Munson, S.A. Identifying and planning for individualized change. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–27. [Google Scholar] [CrossRef] [Green Version]
  136. Zia, J.; Schroeder, J.; Munson, S.; Fogarty, J.; Nguyen, L.; Barney, P.; Heitkemper, M.; Ladabaum, U. Feasibility and usability pilot study of a novel irritable bowel syndrome food and gastrointestinal symptom journal smartphone app. Clin. Transl. Gastroenterol. 2016, 7, e147. [Google Scholar] [CrossRef]
  137. Ishihara, S.; Tada, Y.; Fukuba, N.; Oka, A.; Kusunoki, R.; Mishima, Y.; Oshima, N.; Moriyama, I.; Yuki, T.; Kawashima, K.; et al. Pathogenesis of irritable bowel syndrome—Review regarding associated infection and immune activation. Digestion 2013, 87, 204–211. [Google Scholar] [CrossRef]
  138. Fukui, H.; Nishida, A.; Matsuda, S.; Kira, F.; Watanabe, S.; Kuriyama, M.; Kawakami, K.; Aikawa, Y.; Oda, N.; Arai, K.; et al. Usefulness of machine learning-based gut microbiome analysis for identifying patients with irritable bowels syndrome. J. Clin. Med. 2020, 9, 2403. [Google Scholar] [CrossRef] [PubMed]
  139. Marengo, A.; Rosso, C.; Bugianesi, E. Liver cancer: Connections with obesity, fatty liver, and cirrhosis. Annu. Rev. Med. 2016, 67, 103–117. [Google Scholar] [CrossRef] [PubMed]
  140. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA. Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  141. El-Serag, H.B.; Mason, A.C. Rising incidence of hepatocellular carcinoma in the United States. N. Engl. J. Med. 1999, 340, 745–750. [Google Scholar] [CrossRef]
  142. Yasaka, K.; Akai, H.; Abe, O.; Kiryu, S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: A preliminary study. Radiology 2018, 286, 887–896. [Google Scholar] [CrossRef] [Green Version]
  143. Hamm, C.A.; Wang, C.J.; Savic, L.J.; Ferrante, M.; Schobert, I.; Schlachter, T.; Lin, M.; Duncan, J.S.; Weinreb, J.C.; Chapiro, J.; et al. Deep learning for liver tumor diagnosis part I: Development of a convolutional neural network classifier for multi-phasic MRI. Eur. Radiol. 2019, 29, 3338–3347. [Google Scholar] [CrossRef] [PubMed]
  144. Nishida, N.; Yamakawa, M.; Shiina, T.; Kudo, M. Current status and perspectives for computer-aided ultrasonic diagnosis of liver lesions using deep learning technology. Hepatol. Int. 2019, 13, 416–421. [Google Scholar] [CrossRef] [PubMed]
  145. Guo, L.-H.; Wang, D.; Qian, Y.-Y.; Zheng, X.; Zhao, C.-K.; Li, X.-L.; Bo, X.-W.; Yue, W.-W.; Zhang, Q.; Shi, J.; et al. A two-stage multi-view learning framework based computer-aided diagnosis of liver tumors with contrast enhanced ultrasound images. Clin. Hemorheol. Microcirc. 2018, 69, 343–354. [Google Scholar] [CrossRef]
  146. Marya, N.B.; Powers, P.D.; Fujii-Lau, L.; Abu Dayyeh, B.K.; Gleeson, F.C.; Chen, S.; Long, Z.; Hough, D.M.; Chandrasekhara, V.; Iyer, P.G.; et al. Application of artificial intelligence using a novel EUS-based convolutional neural network model to identify and distinguish benign and malignant hepatic masses. Gastrointest. Endosc. 2020, 93, 1121–1130. [Google Scholar] [CrossRef]
  147. Sun, C.; Xu, A.; Liu, D.; Xiong, Z.; Zhao, F.; Ding, W. Deep learning-based classification of liver cancer histopathology images using only global labels. IEEE J. Biomed. Health Informat. 2020, 24, 1643–1651. [Google Scholar] [CrossRef]
  148. Singal, A.G.; Mukherjee, A.; Elmunzer, J.B.; Higgins, P.D.R.; Lok, A.S.; Zhu, J.; Marrero, J.A.; Waljee, A.K. Machine Learning algorithms outperform conventional regression models in predicting development of hepatocellular carcinoma. Am. J. Gastroenterol. 2013, 108, 1723–1730. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  149. Feng, S.-T.; Jia, Y.; Liao, B.; Huang, B.; Zhou, Q.; Li, X.; Wei, K.; Chen, L.; Li, B.; Wang, W.; et al. Preoperative prediction of microvascular invasion in hepatocellular cancer: A radiomics model using Gd-EOB-DTPA-enhanced MRI. Eur. Radiol. 2019, 29, 4648–4659. [Google Scholar] [CrossRef]
  150. Abajian, A.; Murali, N.; Savic, L.J.; Laage-Gaupp, F.M.; Nezami, N.; Duncan, J.S.; Schlachter, T.; Lin, M.; Geschwind, J.-F.; Chapiro, J. Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning—An artificial intelligence concept. J. Vasc. Interv. Radiol. 2018, 29, 850–857.e1. [Google Scholar] [CrossRef] [PubMed]
  151. Saillard, C.; Schmauch, B.; Laifa, O.; Moarii, M.; Toldo, S.; Zaslavskiy, M.; Pronier, E.; Laurent, A.; Amaddeo, G.; Regnault, H.; et al. Predicting survival after hepatocellular carcinoma resection using deep learning on histological slides. Hepatology 2020, 72, 2000–2013. [Google Scholar] [CrossRef]
  152. Liu, D.; Liu, F.; Xie, X.; Su, L.; Liu, M.; Xie, X.; Kuang, M.; Huang, G.; Wang, Y.; Zhou, H.; et al. Accurate prediction of responses to transarterial chemoembolization for patients with hepatocellular carcinoma by using artificial intelligence in contrast-enhanced ultrasound. Eur. Radiol. 2020, 30, 2365–2376. [Google Scholar] [CrossRef]
  153. Liu, F.; Liu, D.; Wang, K.; Xie, X.; Su, L.; Kuang, M.; Huang, G.; Peng, B.; Wang, Y.; Lin, M.; et al. Deep learning radiomics based on contrast-enhanced ultrasound might optimize curative treatments for very-early or early-stage hepatocellular carcinoma patients. Liver Cancer 2020, 9, 397–413. [Google Scholar] [CrossRef]
  154. Younossi, Z.M.; Koenig, A.B.; Abdelatif, D.; Fazel, Y.; Henry, L.; Wymer, M. Global epidemiology of nonalcoholic fatty liver disease-Meta-analytic assessment of prevalence, incidence, and outcomes. Hepatology 2016, 64, 73–84. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  155. Biswas, M.; Kuppili, V.; Edla, D.R.; Suri, H.S.; Saba, L.; Marinhoe, R.T.; Sanches, J.M.; Suri, J.S. Symtosis: A liver ultrasound tissue characterization and risk stratification in optimized deep learning paradigm. Comput. Methods Programs Biomed. 2018, 155, 165–177. [Google Scholar] [CrossRef] [PubMed]
  156. Byra, M.; Styczynski, G.; Szmigielski, C.; Kalinowski, P.; Michałowski, Ł.; Paluszkiewicz, R.; Ziarkiewicz-Wróblewska, B.; Zieniewicz, K.; Sobieraj, P.; Nowicki, A. Transfer learning with deep convolutional neural network for liver steatosis assessment in ultrasound images. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1895–1903. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  157. Goodman, Z.D. Grading and staging systems for inflammation and fibrosis in chronic liver diseases. J. Hepatol. 2007, 47, 598–607. [Google Scholar] [CrossRef] [PubMed]
  158. Gilmore, I.T.; Burroughs, A.; Murray-Lyon, I.M.; Williams, R.; Jenkins, D.; Hopkins, A. Indications, methods, and outcomes of percutaneous liver biopsy in England and Wales: An audit by the British Society of Gastroenterology and the Royal College of Physicians of London. Gut 1995, 36, 437–441. [Google Scholar] [CrossRef] [Green Version]
  159. Decharatanachart, P.; Chaiteerakij, R.; Tiyarattanachai, T.; Treeprasertsuk, S. Application of artificial intelligence in chronic liver diseases: A systematic review and meta-analysis. BMC Gastroenterol. 2021, 21, 10. [Google Scholar] [CrossRef]
  160. Tsochatzis, E.A.; Gurusamy, K.S.; Ntaoula, S.; Cholongitas, E.; Davidson, B.R.; Burroughs, A.K. Elastography for the diagnosis of severity of fibrosis in chronic liver disease: A meta-analysis of diagnostic accuracy. J. Hepatol. 2011, 54, 650–659. [Google Scholar] [CrossRef]
  161. Wang, K.; Lu, X.; Zhou, H.; Gao, Y.; Zheng, J.; Tong, M.; Wu, C.; Liu, C.; Huang, L.; Jiang, T.; et al. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: A prospective multicentre study. Gut 2019, 68, 729–741. [Google Scholar] [CrossRef] [PubMed]
  162. Gatos, I.; Tsantis, S.; Spiliopoulos, S.; Karnabatidis, D.; Theotokas, I.; Zoumpoulis, P.; Loupas, T.; Hazle, J.D.; Kagadis, G.C. A machine-learning algorithm toward color analysis for chronic liver disease classification, employing ultrasound shear wave elastography. Ultrasound Med. Biol. 2017, 43, 1797–1810. [Google Scholar] [CrossRef] [PubMed]
  163. Perveen, S.; Shahbaz, M.; Keshavjee, K.; Guergachi, A. A systematic machine learning based approach for the diagnosis of non-alcoholic fatty liver disease risk and progression. Sci. Rep. 2018, 8, 2112. [Google Scholar] [CrossRef] [Green Version]
  164. Spann, A.; Yasodhara, A.; Kang, J.; Watt, K.; Wang, B.; Goldenberg, A.; Bhat, M. Applying machine learning in liver disease and transplantation: A comprehensive review. Hepatology 2020, 71, 1093–1105. [Google Scholar] [CrossRef] [PubMed]
  165. Ma, H.; Xu, C.; Shen, Z.; Yu, C.; Li, Y. Application of machine learning techniques for clinical predictive modeling: A cross-sectional study on nonalcoholic fatty liver disease in China. BioMed Res. Int. 2018, 2018, 4304376. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  166. Sowa, J.P.; Atmaca, Ö.; Kahraman, A.; Schlattjan, M.; Lindner, M.; Sydor, S.; Scherbaum, N.; Lackner, K.; Gerken, G.; Heider, D.; et al. Non-invasive separation of alcoholic and non-alcoholic liver disease with predictive modeling. PLoS ONE 2014, 9, e101444. [Google Scholar] [CrossRef] [PubMed]
  167. Huang, H.; Shiffman, M.L.; Friedman, S.; Venkatesh, R.; Bzowej, N.; Abar, O.T.; Rowland, C.M.; Catanese, J.J.; Leong, D.U.; Sninsky, J.J.; et al. A 7 gene signature identifies the risk of developing cirrhosis in patients with chronic hepatitis C. Hepatology 2007, 46, 297–306. [Google Scholar] [CrossRef] [PubMed]
  168. Lara, J.; López-Labrador, F.X.; González-Candelas, F.; Berenguer, M.; Khudyakov, Y.E. Computational models of liver fibrosis progression for hepatitis C virus chronic infection. BMC Bioinform. 2014, 15, S5. [Google Scholar] [CrossRef] [Green Version]
  169. Shousha, H.I.; Awad, A.H.; Omran, D.A.; Elnegouly, M.M.; Mabrouk, M. Data mining and machine learning algorithms using IL28B genotype and biochemical markers best predicted advanced liver fibrosis in chronic hepatitis C. Jpn. J. Infect. Dis. 2018, 71, 51–57. [Google Scholar] [CrossRef] [Green Version]
  170. Wei, R.; Wang, J.; Wang, X.; Xie, G.; Wang, Y.; Zhang, H.; Peng, C.-Y.; Rajani, C.; Kwee, S.; Liu, P.; et al. Clinical prediction of HBV and HCV related hepatic fibrosis using machine learning. EBioMedicine 2018, 35, 124–132. [Google Scholar] [CrossRef] [Green Version]
  171. Konerman, M.A.; Lu, D.; Zhang, Y.; Thomson, M.; Zhu, J.; Verma, A.; Liu, B.; Talaat, N.; Balis, U.; Higgins, P.D.R.; et al. Assessing risk of fibrosis progression and liver-related clinical outcomes among patients with both early stage and advanced chronic hepatitis C. PLoS ONE 2017, 12, e0187344. [Google Scholar] [CrossRef] [Green Version]
  172. Oh, T.G.; Kim, S.M.; Caussy, C.; Fu, T.; Guo, J.; Bassirian, S.; Singh, S.; Madamba, E.V.; Bettencourt, R.; Richards, L.; et al. A universal gut-microbiome-derived signature predicts cirrhosis. Cell Metab. 2020, 32, 878–888.e6. [Google Scholar] [CrossRef]
  173. Barnabas, A.; Chapman, R.W. Primary Sclerosing Cholangitis: Is any treatment worthwhile? Curr. Gastroenterol. Rep. 2012, 14, 17–24. [Google Scholar] [CrossRef]
  174. Eaton, J.E.; Vesterhus, M.; McCauley, B.M.; Atkinson, E.J.; Schlicht, E.M.; Juran, B.D.; Gossard, A.A.; LaRusso, N.F.; Gores, G.J.; Karlsen, T.H.; et al. Primary Sclerosing Cholangitis Risk Estimate Tool (PREsTo) predicts outcomes of the disease: A derivation and validation study using machine learning. Hepatology 2020, 71, 214–224. [Google Scholar] [CrossRef] [PubMed]
  175. Halldorson, J.B.; Bakthavatsalam, R.; Fix, O.; Reyes, J.D.; Perkins, J.D. D-MELD, a simple predictor of post liver transplant mortality for optimization of donor/recipient matching. Am. J. Transplant. 2009, 9, 318–326. [Google Scholar] [CrossRef] [PubMed]
  176. Croome, K.P.; Marotta, P.; Wall, W.J.; Dale, C.; Levstik, M.A.; Chandok, N.; Hernandez-Alejandro, R. Should a lower quality organ go to the least sick patient? Model for end-stage liver disease score and donor risk index as predictors of early allograft dysfunction. Transplant. Proc. 2012, 44, 1303–1306. [Google Scholar] [CrossRef] [PubMed]
  177. Briceño, J.; Cruz-Ramírez, M.; Prieto, M.; Navasa, M.; De Urbina, J.O.; Orti, R.; Gómez-Bravo, M.Á.; Otero, A.; Varo, E.; Tomé, S.; et al. Use of artificial intelligence as an innovative donor-recipient matching model for liver transplantation: Results from a multicenter Spanish study. J. Hepatol. 2014, 61, 1020–1028. [Google Scholar] [CrossRef]
  178. Ayllón, M.D.; Ciria, R.; Cruz-Ramírez, M.; Pérez-Ortiz, M.; Gómez, I.; Valente, R.; O’Grady, J.; de la Mata, M.; Hervás-Martínez, C.; Heaton, N.D.; et al. Validation of artificial neural networks as a methodology for donor-recipient matching for liver transplantation. Liver Transpl. 2018, 24, 192–203. [Google Scholar] [CrossRef] [Green Version]
  179. Lau, L.; Kankanige, Y.; Rubinstein, B.; Jones, R.; Christophi, C.; Muralidharan, V.; Bailey, J. Machine-learning algorithms predict graft failure after liver transplantation. Transplantation 2017, 101, e125–e132. [Google Scholar] [CrossRef]
  180. Bhat, V.; Tazari, M.; Watt, K.D.; Bhat, M. New-onset diabetes and preexisting diabetes are associated with comparable reduction in long-term survival after liver transplant: A machine learning approach. Mayo Clin. Proc. 2018, 93, 1794–1802. [Google Scholar] [CrossRef] [PubMed]
  181. Davidson, J.; Wilkinson, A.; Dantal, J.; Dotta, F.; Haller, H.; Hernandez, D.; Kasiske, B.L.; Kiberd, B.; Krentz, A.; Legendre, C.; et al. New-onset diabetes after transplantation: 2003 International Consensus Guidelines. Transplantation 2003, 75, SS3–SS24. [Google Scholar] [CrossRef]
  182. Ushio, J.; Kanno, A.; Ikeda, E.; Ando, K.; Nagai, H.; Miwata, T.; Kawasaki, Y.; Tada, Y.; Yokoyama, K.; Numao, N.; et al. Pancreatic ductal adenocarcinoma: Epidemiology and risk factors. Diagnostics 2021, 11, 562. [Google Scholar] [CrossRef]
  183. Hur, C.; Tramontano, A.C.; Dowling, E.C.; Brooks, G.A.; Jeon, A.; Brugge, W.R.; Gazelle, G.S.; Kong, C.Y.; Pandharipande, P.V. Early pancreatic ductal adenocarcinoma survival is dependent on size. Pancreas 2016, 45, 1062–1066. [Google Scholar] [CrossRef] [PubMed]
  184. Egawa, S.; Toma, H.; Ohigashi, H.; Okusaka, T.; Nakao, A.; Hatori, T.; Maguchi, H.; Yanagisawa, A.; Tanaka, M. Japan pancreatic cancer registry: 30th Year Anniversary. Pancreas 2012, 41, 985–992. [Google Scholar] [CrossRef]
  185. Pereira, S.P.; Oldfield, L.; Ney, A.; Hart, P.A.; Keane, M.G.; Pandol, S.J.; Li, D.; Greenhalf, W.; Jeon, C.Y.; Koay, E.J.; et al. Early detection of pancreatic cancer. Lancet Gastroenterol. Hepatol. 2020, 5, 698–710. [Google Scholar] [CrossRef]
  186. Pelaez-Luna, M.; Takahashi, N.; Fletcher, J.G.; Chari, S.T. Resectability of presymptomatic pancreatic cancer and its relationship to onset of diabetes: A retrospective review of CT scans and fasting glucose values prior to diagnosis. Am. J. Gastroenterol. 2007, 102, 2157–2163. [Google Scholar] [CrossRef]
  187. Canto, M.I.; Hruban, R.H.; Fishman, E.K.; Kamel, I.R.; Schulick, R.; Zhang, Z.; Topazian, M.; Takahashi, N.; Fletcher, J.; Petersen, G.; et al. Frequent detection of pancreatic lesions in asymptomatic high-risk individuals. Gastroenterology 2012, 142, 796–804. [Google Scholar] [CrossRef] [Green Version]
  188. Liu, F.; Xie, L.; Xia, Y.; Fishman, E.; Yuille, A. Joint Shape Representation and Classification for Detecting PDAC. In Lecture Notes in Computer Science; (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); ProQuest LLC: Ann Arbor, MI, USA, 2019; Volume 11861, pp. 212–220. ISBN 9783030326913. [Google Scholar]
  189. Kanno, A.; Masamune, A.; Hanada, K.; Maguchi, H.; Shimizu, Y.; Ueki, T.; Hasebe, O.; Ohtsuka, T.; Nakamura, M.; Takenaka, M.; et al. Multicenter study of early pancreatic cancer in Japan. Pancreatology 2018, 18, 61–67. [Google Scholar] [CrossRef]
  190. Tonozuka, R.; Itoi, T.; Nagata, N.; Kojima, H.; Sofuni, A.; Tsuchiya, T.; Ishii, K.; Tanaka, R.; Nagakawa, Y.; Mukai, S. Deep learning analysis for the detection of pancreatic cancer on endosonographic images: A pilot study. J. Hepatobiliary Pancreat. Sci. 2021, 28, 95–104. [Google Scholar] [CrossRef] [PubMed]
  191. Tonozuka, R.; Mukai, S.; Itoi, T. The role of artificial intelligence in endoscopic ultrasound for pancreatic disorders. Diagnostics 2020, 11, 18. [Google Scholar] [CrossRef]
  192. Hsieh, M.H.; Sun, L.-M.; Lin, C.-L.; Hsieh, M.-J.; Hsu, C.-Y.; Kao, C.-H. Development of a prediction model for pancreatic cancer in patients with type 2 diabetes using logistic regression and artificial neural network models. Cancer Manag. Res. 2018, 10, 6317–6324. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  193. Rau, H.-H.; Hsu, C.-Y.; Lin, Y.-A.; Atique, S.; Fuad, A.; Wei, L.-M.; Hsu, M.-H. Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network. Comput. Methods Programs Biomed. 2016, 125, 58–65. [Google Scholar] [CrossRef]
  194. Matthaei, H.; Schulick, R.D.; Hruban, R.H.; Maitra, A. Cystic precursors to invasive pancreatic cancer. Nat. Rev. Gastroenterol. Hepatol. 2011, 8, 141–150. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  195. Kuwahara, T.; Hara, K.; Mizuno, N.; Okuno, N.; Matsumoto, S.; Obata, M.; Kurita, Y.; Koda, H.; Toriyama, K.; Onishi, S.; et al. Usefulness of deep learning analysis for the diagnosis of malignancy in intraductal papillary mucinous neoplasms of the pancreas. Clin. Transl. Gastroenterol. 2019, 10, e00045. [Google Scholar] [CrossRef]
  196. Dalal, V.; Carmicheal, J.; Dhaliwal, A.; Jain, M.; Kaur, S.; Batra, S.K. Radiomics in stratification of pancreatic cystic lesions: Machine learning in action. Cancer Lett. 2020, 469, 228–237. [Google Scholar] [CrossRef]
  197. Marya, N.B.; Powers, P.D.; Chari, S.T.; Gleeson, F.C.; Leggett, C.L.; Abu Dayyeh, B.K.; Chandrasekhara, V.; Iyer, P.G.; Majumder, S.; Pearson, R.K.; et al. Utilisation of artificial intelligence for the development of an EUS-convolutional neural network model trained to enhance the diagnosis of autoimmune pancreatitis. Gut 2020, 70, 1335–1344. [Google Scholar] [CrossRef]
  198. Nagendran, M.; Chen, Y.; Lovejoy, C.A.; Gordon, A.C.; Komorowski, M.; Harvey, H.; Topol, E.J.; Ioannidis, J.P.A.; Collins, G.S.; Maruthappu, M. Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ 2020, 368, m689. [Google Scholar] [CrossRef] [Green Version]
  199. Ruffle, J.K.; Farmer, A.D.; Aziz, Q. Artificial intelligence-assisted gastroenterology—Promises and pitfalls. Am. J. Gastroenterol. 2019, 114, 422–428. [Google Scholar] [CrossRef] [PubMed]
  200. Epstein, D.; Cordeiro, F.; Bales, E.; Fogarty, J.; Munson, S. Taming data complexity in lifelogs. In Proceedings of the Conference on Designing Interactive Systems, Vancouver, BC, Canada, 21–25 June 2014; ACM: New York, NY, USA, 2014; pp. 667–676. [Google Scholar]
  201. Klare, P.; Sander, C.; Prinzen, M.; Haller, B.; Nowack, S.; Abdelhafez, M.; Poszler, A.; Brown, H.; Wilhelm, D.; Schmid, R.M.; et al. Automated polyp detection in the colorectum: A prospective study (with videos). Gastrointest. Endosc. 2019, 89, 576–582.e1. [Google Scholar] [CrossRef] [PubMed]
  202. Mori, Y.; Kudo, S.; Misawa, M.; Saito, Y.; Ikematsu, H.; Hotta, K.; Ohtsuka, K.; Urushibara, F.; Kataoka, S.; Ogawa, Y.; et al. Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy. Ann. Intern. Med. 2018, 169, 357. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Summary of AI technologies for gastroenterology, hepatology, and pancreatology. IBS: irritable bowel syndrome, GI: gastrointestinal, GIST: gastrointestinal stromal tumor, IBD: inflammatory bowel disease, EUS: endoscopic ultrasonography, VCE: video capsule endoscopy, NBI: narrow-band imaging, CT: computed tomography, MRI: magnetic resonance imaging, HCC: hepatocellular carcinoma, FNH: focal nodular hyperplasia, IPMN: intraductal papillary mucinous neoplasm, NAFLD: nonalcoholic fatty liver disease, PSC: primary sclerosing cholangitis, AIP: autoimmune pancreatitis.
Figure 2. Schematics of neural networks and the number of publications on medical AI. (A) A schematic of a traditional neural network algorithm. (B) A schematic of a neural network with a deep learning algorithm. (C) The number of publications involving AI in the medical field, based on a PubMed search using the following keywords: ("artificial intelligence" OR "machine learning" OR "deep learning" OR "neural network") AND (medicine OR gastroenterology OR hepatology OR pancreatology OR endoscopy OR radiology OR ultrasonography OR "computed tomography" OR "clinical imaging").
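The publication trend in Figure 2C can be reproduced programmatically against the live PubMed index. Below is a minimal sketch using Biopython's Entrez wrapper for the NCBI E-utilities; the query string mirrors the keywords in the caption, while the contact e-mail address, the 2000–2020 year range, and the biopython dependency itself are illustrative assumptions rather than details taken from this article.

```python
# Minimal sketch: yearly PubMed hit counts for the Figure 2C query.
# Assumes `pip install biopython` and network access to NCBI E-utilities;
# the e-mail address and year range below are illustrative placeholders.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requests a contact address

QUERY = (
    '("artificial intelligence" OR "machine learning" OR "deep learning" '
    'OR "neural network") AND (medicine OR gastroenterology OR hepatology '
    'OR pancreatology OR endoscopy OR radiology OR ultrasonography '
    'OR "computed tomography" OR "clinical imaging")'
)

for year in range(2000, 2021):
    # retmax=0: only the total hit count is needed, not the record IDs
    handle = Entrez.esearch(
        db="pubmed",
        term=QUERY,
        datetype="pdat",  # filter on publication date
        mindate=str(year),
        maxdate=str(year),
        retmax=0,
    )
    record = Entrez.read(handle)
    handle.close()
    print(f"{year}: {record['Count']} publications")
```

Counts retrieved this way drift as PubMed indexing is updated, so the exact numbers will not necessarily match those plotted in the figure.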
Figure 3. Representative images of AI-aided endoscopies and ultrasonography. (A) Pharyngeal cancer detected by AI with narrow-band imaging. Adapted from (Tamashiro A, Dig Endosc 2020) [32]. (B) Early gastric cancer detected by an AI system. Adapted from (Hirasawa T, Gastric Cancer 2018) [35]. (C) Small intestinal bleeding detected by an AI system. Adapted from (Tsuboi A, Dig Endosc 2020) [36]. (D) A liver mass detected by an AI system. Adapted from (Schmauch B, Diagn Interv Imaging 2019) [37].
Table 1. AI-aided devices approved in the field of gastroenterology.

Modality | Device Name | Institution | Memo
Endoscopy | EndoBRAIN-EYE | Olympus | Colon tumor detection; made for endocytoscope
Endoscopy | EndoBRAIN | Olympus | Colon tumor diagnosis; made for endocytoscope
Endoscopy | EndoBRAIN-Plus | Olympus | Tumor depth diagnosis; made for endocytoscope
Endoscopy | EndoBRAIN-UC | Olympus | UC activity diagnosis; made for endocytoscope
Endoscopy | CAD EYE | Fujifilm | Colon polyp detection and diagnosis
Endoscopy | WISE VISION | NEC | Colon tumor detection; connectable to 3 major endoscope manufacturers
Endoscopy | WavSTAT4 | PENTAX 1 | Colorectal cancer diagnosis
Endoscopy | GI Genius | Medtronic | Colorectal cancer diagnosis
Endoscopy | Discovery | PENTAX 1 | AI-assisted colon polyp detector
CT | Liver AI | Arterys | Liver lesion detection
US | Poseidon Ultrasound | BUTTERFLY NETWORK | Liver lesion detection

1 Hoya group. UC: ulcerative colitis, CT: computed tomography, US: ultrasonography.