Article

Deep Learning Artificial Intelligence to Predict the Need for Tracheostomy in Patients of Deep Neck Infection Based on Clinical and Computed Tomography Findings—Preliminary Data and a Pilot Study

1 Department of Otorhinolaryngology and Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou 333, Taiwan
2 School of Medicine, Chang Gung University, Taoyuan 333, Taiwan
3 Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Linkou 333, Taiwan
4 Division of Chinese Internal Medicine, Center for Traditional Chinese Medicine, Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 20 June 2022 / Revised: 8 August 2022 / Accepted: 10 August 2022 / Published: 12 August 2022
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

Abstract

Background: Deep neck infection (DNI) can lead to airway obstruction. Rather than intubation, some patients need tracheostomy to secure the airway. However, no study has used deep learning (DL) artificial intelligence (AI) to predict the need for tracheostomy in DNI patients. Thus, the purpose of this study was to develop a DL framework to predict the need for tracheostomy in DNI patients. Methods: In total, 392 patients with DNI were enrolled in this study between August 2016 and April 2022; 80% of the patients (n = 317) were randomly assigned to a training group for model development and validation, and the remaining 20% (n = 75) were assigned to a test group to determine model accuracy. The k-nearest neighbor method was applied to analyze the clinical and computed tomography (CT) data of the patients. The model's predictions regarding the need for tracheostomy were compared with the actual decisions made by clinical experts. Results: No significant differences were observed in clinical or CT parameters between the training and test groups. The DL model yielded a prediction accuracy of 78.66% (59/75 cases), with sensitivity and specificity of 62.50% and 80.60%, respectively. Conclusions: We demonstrated a DL framework to predict the need for tracheostomy in DNI patients based on clinical and CT data. The model has potential for clinical application; in particular, it may assist less experienced clinicians in determining whether tracheostomy is necessary in cases of DNI.

1. Introduction

Deep neck infection (DNI) affects the fascial spaces of the neck and can be fatal [1]. DNI may cause airway compromise, which is associated with serious morbidity and even mortality. Protecting the airway is therefore essential in the management of DNI [2]. Tracheostomy is considered for DNI patients when intubation is difficult or infeasible. However, the decision to perform tracheostomy usually rests on the treating physician's clinical judgment.
Artificial intelligence (AI) allows computers to perform tasks that normally require human intellect and cognitive processes [3]. Machine learning is a form of AI in which predictions are made based on information extracted from input data [4,5,6]. Multilayered architectures based on mathematical functions allow machines to learn more deeply and to interpret complex data in a highly precise manner; such machine learning methods are referred to as deep learning (DL). DL AI has made remarkable progress in recent years [7]. However, to date, no DL model is available to help physicians determine when to perform tracheostomy in cases of DNI, especially when there is no obvious sign of airway obstruction. Thus, our goal was to establish a DL model for predicting the need for tracheostomy in patients with DNI.

2. Materials and Methods

This study was a retrospective review of the medical records of 392 DNI patients admitted to Chang Gung Memorial Hospital, Linkou, Taiwan, between August 2016 and April 2022. Computed tomography (CT) was performed for diagnostic imaging. Incision and drainage were performed when the DNI caused airway obstruction, when symptoms progressed after 2 days of intravenous antibiotics, or when an abscess ≥2 cm was detected.
Based on each patient's vital signs, blood oxygen saturation, respiratory status, and laboratory and imaging findings, the treating physician decided whether tracheostomy was required to secure the airway [8].
Ceftriaxone (1 g every 12 h) and metronidazole (500 mg every 8 h) were given as empiric antibiotics [9]. The antibiotic regimen was adjusted according to culture results. If no causative microorganism was identified, patients were treated with intravenous antibiotics for 7–10 days, followed by 7 days of oral amoxicillin trihydrate/clavulanate potassium or clindamycin [10].

2.1. Measurement of CT

We measured the maximum diameter of the abscess on axial, coronal, or sagittal CT scans. Next, we measured the nearest distance from the abscess to the inlet of the trachea on the axial scan; both measurements were used as DL parameters (Figure 1A–D).

2.2. Data Collection

To establish the DL model for predicting the need for tracheostomy, we collected the clinical data listed in Table 1 from the medical records. Together with the maximum diameter of the abscess and the nearest distance from the abscess to the inlet of the trachea, these clinical variables were entered into the DL model. The values of all continuous and categorical variables were standardized, i.e., converted into z-scores: for each variable, we subtracted the mean from the individual values and then divided the result by the standard deviation [11].
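As a minimal illustration of this standardization step (the authors' actual preprocessing code is not available, and the column names below are hypothetical), the z-score conversion can be written in Python as follows:

```python
import pandas as pd

# Hypothetical example records; the column names are illustrative,
# not the study's actual variable schema.
df = pd.DataFrame({
    "age": [51, 63, 37],
    "wbc": [15000, 9800, 21000],
    "crp": [156.9, 80.2, 240.5],
    "abscess_diameter_cm": [6.4, 3.1, 9.8],
    "distance_to_trachea_cm": [1.4, 2.0, 0.5],
})

# z-score standardization: subtract each column's mean, then divide by its standard deviation.
z_scores = (df - df.mean()) / df.std()
print(z_scores.round(2))
```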

2.3. k-Nearest Neighbor Method

To develop a DL model, the dataset of interest is first separated into training and test subsets [4,6]. The model can then be validated using the test dataset; this allows for the accurate prediction of model performance when analyzing previously unseen data [3].
In this study, 80% of the data (n = 317) were randomly selected for model training; the remaining 20% (n = 75) were used for testing the model (Figure 2). Several mathematical algorithms may be used for DL models; here, the k-nearest neighbor (k-NN) method was used. The k-NN algorithm classifies hitherto unclassified data based on the classification of the nearest neighbors among a set of previously classified instances [12,13,14,15,16]. In other words, the k-NN algorithm measures the distance or similarity between test and training instances [17,18,19], and classifies each test instance according to its similarity to its nearest training neighbors. The final classifications and output depend on the distances between the test and training data (Figure 3) [5,6,11,14,20,21].
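To make this procedure concrete, the sketch below shows an 80/20 random split followed by a 1-nearest-neighbor classifier using scikit-learn. The feature matrix and labels are synthetic stand-ins for the standardized clinical and CT variables; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: X holds standardized clinical/CT features,
# y holds 1 (tracheostomy) or 0 (no tracheostomy).
X = rng.normal(size=(392, 10))
y = rng.integers(0, 2, size=392)

# 80% training / 20% test split, mirroring the 317/75 patient split in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 1-NN classifier; the default Minkowski metric with p=2 is the Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))
```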
When using the k-NN algorithm, Euclidean distance D is obtained to represent the distance between two points, x and y, in n-dimensional space, with each n-dimension corresponding to one of the n-features needed to characterize an instance [11,19,22,23]. The following formula is used:
D(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2}
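As a direct illustration of this distance computation and the resulting nearest-neighbor lookup (a minimal sketch with made-up feature vectors, not the study's code):

```python
import numpy as np

def euclidean_distance(x, y):
    """D(x, y) = sqrt(sum_i (x_i - y_i)^2) for n-dimensional feature vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.sum((x - y) ** 2))

def nearest_neighbor_label(query, train_X, train_y):
    """Return the label of the training instance closest to the query (k = 1)."""
    distances = [euclidean_distance(query, row) for row in train_X]
    return train_y[int(np.argmin(distances))]

# Tiny example with two standardized features per patient.
train_X = np.array([[0.2, -1.1], [1.5, 0.3], [-0.7, 0.9]])
train_y = np.array([0, 1, 0])  # 1 = tracheostomy, 0 = no tracheostomy (hypothetical labels)
print(nearest_neighbor_label([1.2, 0.1], train_X, train_y))  # nearest neighbor is [1.5, 0.3] -> 1
```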
The k value used should be the one resulting in the highest classification accuracy [19]. In this study, k = 1 was chosen because this value provided the optimal classification performance after cross-validation, as in a previous study [21].
After verifying our model, we used it to predict the need for tracheostomy in DNI patients. The model parameters were optimized through an iterative process that progressively reduced the discrepancy between the actual and expected model outputs [6].

2.4. Exclusion Criteria

Patients with immunocompromised status, serious cardiopulmonary illness, or history of head and neck trauma were excluded. In total, 392 patients were enrolled.

2.5. Statistical Analysis

The Kolmogorov–Smirnov test revealed that the data were not normally distributed, so we used the chi-square and Mann–Whitney U tests to analyze categorical and continuous variables, respectively. Classification accuracy (tracheostomy vs. non-tracheostomy) was calculated as the ratio between the number of correctly classified patients and the total number of patients [11]. Sensitivity (true-positive rate) refers to the proportion of correctly identified positive (tracheostomy) patients, while specificity (true-negative rate) is the proportion of correctly identified negative (non-tracheostomy) patients. All data were analyzed using MedCalc software (ver. 18.6; MedCalc, Ostend, Belgium) and Excel (Microsoft Corp., Redmond, WA, USA) [7,24]. A p value < 0.05 was considered to reflect statistical significance.
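As a minimal, self-contained illustration of how these metrics are computed from a confusion matrix (illustrative labels only, not the study's analysis code):

```python
from sklearn.metrics import confusion_matrix

# Illustrative true labels and model predictions (1 = tracheostomy, 0 = no tracheostomy).
y_true = [1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]

# With labels=[0, 1], ravel() returns the counts in the order tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate: tracheostomy patients correctly identified
specificity = tn / (tn + fp)   # true-negative rate: non-tracheostomy patients correctly identified
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```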

3. Results

Table 1 lists the demographic and clinical data. In total, 392 patients with DNI were enrolled: 261 males (66.58%) and 131 females (33.42%) with a mean age of 51.36 ± 18.74 years. The mean duration of the chief complaint was 5.04 ± 4.49 days. With regard to laboratory data, the mean WBC count was 15,007.39 ± 5801.19/μL, the mean CRP level was 156.94 ± 99.61 mg/L, and the mean blood sugar level was 142.66 ± 72.46 mg/dL. Furthermore, 147 (37.50%) patients had diabetes mellitus (DM).
Involvement of a single deep neck space was observed in 108 (27.55%) patients, two spaces were involved in 151 (38.52%) patients, and three or more spaces were involved in 133 (33.93%) patients. Mediastinitis was observed in 20 (5.10%) patients. On CT images, the mean maximum diameter of the abscess was 6.36 ± 3.08 cm, and the mean nearest distance from the abscess to the inlet of the trachea was 1.41 ± 1.35 cm. A tracheostomy was performed in 50 (12.75%) patients.
Table 2 compares the 317 patients in the training group with the 75 patients in the test group. No significant differences were observed between the two groups in terms of clinical variables or CT scan parameters.
Based on the chosen parameters, our DL model yielded a patient classification accuracy of 78.66% (59/75). The sensitivity and specificity were 62.50% and 80.60%, respectively.

4. Discussion

Complications of DNI can include esophageal perforation, pneumonia, internal jugular vein thrombosis (Lemierre's syndrome), carotid artery erosion, and airway compromise [25,26,27]. The mortality rate is relatively high when these complications occur [28]. A tracheostomy is needed in some DNI cases to secure the airway.
DL models are used for making predictions based on previous observations [6,29]. Several DL algorithms are available to analyze large datasets; through such analyses, complex and heterogeneous data can inform real-world clinical practice and recommendations [30,31,32,33,34]. The medical applications of DL include cancer diagnosis, prognostic predictions, integration of clinical and genomic data, clinical trial design, and analysis of readmission and mortality data [35,36,37,38,39]. With regard to infectious diseases, DL has been used to aid diagnosis, predict severity, and determine the most appropriate antimicrobial treatment for individual patients [40]. Wilson et al. used DL to diagnose peritonsillar abscess with high accuracy [4]. Our DL model was able to predict whether tracheostomy would be needed for DNI patients based on their clinical and CT data; the results suggest that it could be used in clinical practice.
The k-NN algorithm is one of the oldest, simplest, and most accurate DL algorithms for data mining and pattern classification, and is widely applied in many fields [17,21,41,42,43]. The k-NN algorithm operates on the assumption that instances in a dataset are often in close proximity to other instances with similar characteristics; classification is based on the similarity of instances with their nearest neighbors. The relative distance between instances is more important than their absolute position within a given region [19]. The k-NN algorithm is suitable for analyzing large, multidimensional datasets [41,44], and is the optimal method when prior knowledge of the data distribution is lacking [17,45]. Furthermore, there is no requirement for off-line training when using the k-NN algorithm, so it is also time efficient [14]. It already plays an important role in the fields of transportation, information security, and medicine [21].
As a user-defined integer, the value of k is typically small. If k = 1, the algorithm assigns the unclassified instance the class of its single nearest neighbor. If k = 3, the classification is determined by the three nearest neighbors of the unclassified instance [11]. When small k values are used, approximation error decreases while estimation error increases; the opposite trends are seen when k takes a large value. In practical applications, k generally takes a relatively small value, and cross-validation is usually used to determine the most appropriate value [21]. The 1-NN classifier is often used as a benchmark for other classifiers because it exhibits reasonable performance for many pattern classification problems [14].
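A minimal sketch of such cross-validated k selection is shown below; the synthetic data, fold count, and candidate k values are assumptions for illustration rather than details reported in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins for the standardized training features and labels.
X_train = rng.normal(size=(317, 10))
y_train = rng.integers(0, 2, size=317)

# Score a few small candidate k values with 5-fold cross-validation
# and keep the one with the highest mean accuracy.
cv_scores = {
    k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X_train, y_train, cv=5).mean()
    for k in (1, 3, 5, 7)
}
best_k = max(cv_scores, key=cv_scores.get)
print("Mean CV accuracy per k:", cv_scores)
print("Selected k:", best_k)
```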
In this study, most patients were male, a preponderance also reported previously [9,46]. The average patient age was in the middle-age range, consistent with prior studies [47,48]. Only significant factors can be used for classification [23], and research is ongoing to determine how to identify the most important variables and features for learning algorithms [49,50,51]. In this study, factors were selected for the DL model based on ease of implementation and interpretation, with the goal of providing clinicians with insight into the circumstances under which tracheostomy should be performed. We considered the maximum diameter of the abscess, and its distance from the upper airway inlet on CT scans, to be the most influential parameters with regard to the decision to perform tracheostomy. Therefore, we included these two CT parameters in the training model.
As shown in Table 2, no significant differences were observed in clinical variables or CT parameters between the training and test groups. As with other DL models, we input retrospective data, such that the model was based on the past decisions of clinicians. Our DL model yielded a prediction accuracy of 78.66%. Failure to achieve higher accuracy may have been related to the variables used in the model and to the subjective nature of clinicians' decisions to perform tracheostomy. Our intent was not to suggest that DL is needed because physicians' clinical judgment is increasingly error-prone; rather, this DL model can help clinicians determine whether patients should undergo tracheostomy at the beginning of the treatment course. This could be especially valuable for physicians who are less experienced in making decisions about whether to perform tracheostomy. Well-designed models with acceptable prediction accuracy based on training data can be tuned to handle new data inputs [6].

Study Limitations

Limitations of this study included the use of retrospective data, reliance on patient self-reports for medical history, subjective clinical judgment in the decision to perform tracheostomy, and manual measurement of CT scans; these factors may have introduced disparities or inconsistencies. This pilot study is preliminary research with several deficits to address. Furthermore, the dataset was relatively small (n = 317 in the training group; n = 75 in the test group) and drawn from a single institution.

5. Conclusions

We demonstrated a DL model to predict the need for tracheostomy based on patients’ clinical and CT data. It can help clinicians to decide whether tracheostomy should be performed in cases of DNI, and may lead to improvements in critical care.

Author Contributions

Conceptualization, C.-Y.H. and S.-L.C.; methodology, S.-L.C.; validation, C.-Y.H. and S.-L.C.; data curation, S.-C.C. and S.-L.C.; writing—original draft preparation, S.-L.C.; writing—review and editing, C.-Y.H. and S.-L.C.; visualization, C.-Y.H. and S.-L.C.; supervision, S.-L.C.; project administration, C.-Y.H. and S.-L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the Institutional Review Board (IRB) of Chang Gung Medical Foundation (IRB approval no. 202200742B0).

Informed Consent Statement

The requirement for informed consent was waived because the data were collected retrospectively and anonymized before analyses.

Data Availability Statement

All data generated or analyzed in the study are included in this published article. The data are available on request.

Acknowledgments

The authors thank all of the members of the Department of Otorhinolaryngology–Head and Neck Surgery, Chang Gung Memorial Hospital, Linkou, for their invaluable help.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI, artificial intelligence; CT, computed tomography; CRP, C-reactive protein; DM, diabetes mellitus; DNI, deep neck infection; DL, deep learning; WBC, white blood cell.

References

  1. Velhonoja, J.; Laaveri, M.; Soukka, T.; Irjala, H.; Kinnunen, I. Deep neck space infections: An upward trend and changing characteristics. Eur. Arch. Otorhinolaryngol. 2020, 277, 863–872. [Google Scholar] [CrossRef]
  2. Tapiovaara, L.; Back, L.; Aro, K. Comparison of intubation and tracheotomy in patients with deep neck infection. Eur. Arch. Otorhinolaryngol. 2017, 274, 3767–3772. [Google Scholar] [CrossRef]
  3. Bur, A.M.; Shew, M.; New, J. Artificial Intelligence for the Otolaryngologist: A State of the Art Review. Otolaryngol. Head Neck Surg. 2019, 160, 603–611. [Google Scholar] [CrossRef]
  4. Wilson, M.B.; Ali, S.A.; Kovatch, K.J.; Smith, J.D.; Hoff, P.T. Machine Learning Diagnosis of Peritonsillar Abscess. Otolaryngol. Head Neck Surg. 2019, 161, 796–799. [Google Scholar] [CrossRef]
  5. Laios, A.; Gryparis, A.; DeJong, D.; Hutson, R.; Theophilou, G.; Leach, C. Predicting complete cytoreduction for advanced ovarian cancer patients using nearest-neighbor models. J. Ovarian Res. 2020, 13, 117. [Google Scholar] [CrossRef]
  6. Crowson, M.G.; Ranisau, J.; Eskander, A.; Babier, A.; Xu, B.; Kahmke, R.R.; Chen, J.M.; Chan, T.C.Y. A contemporary review of machine learning in otolaryngology-head and neck surgery. Laryngoscope 2020, 130, 45–51. [Google Scholar] [CrossRef] [PubMed]
  7. Wang, Y.M.; Li, Y.; Cheng, Y.S.; He, Z.Y.; Yang, J.M.; Xu, J.H.; Chi, Z.C.; Chi, F.L.; Ren, D.D. Deep Learning in Automated Region Proposal and Diagnosis of Chronic Otitis Media Based on Computed Tomography. Ear Hear. 2020, 41, 669–677. [Google Scholar] [CrossRef] [PubMed]
  8. Chen, S.L.; Young, C.K.; Tsai, T.Y.; Chien, H.T.; Kang, C.J.; Liao, C.T.; Huang, S.F. Factors Affecting the Necessity of Tracheostomy in Patients with Deep Neck Infection. Diagnostics 2021, 11, 1536. [Google Scholar] [CrossRef]
  9. Yang, S.W.; Lee, M.H.; See, L.C.; Huang, S.H.; Chen, T.M.; Chen, T.A. Deep neck abscess: An analysis of microbial etiology and the effectiveness of antibiotics. Infect. Drug Resist. 2008, 1, 1–8. [Google Scholar] [CrossRef]
  10. Chen, S.L.; Young, C.K.; Liao, C.T.; Tsai, T.Y.; Kang, C.J.; Huang, S.F. Parotid Space, a Different Space from Other Deep Neck Infection Spaces. Microorganisms 2021, 9, 2361. [Google Scholar] [CrossRef]
  11. Garcia-Carretero, R.; Vigil-Medina, L.; Mora-Jimenez, I.; Soguero-Ruiz, C.; Barquero-Perez, O.; Ramos-Lopez, J. Use of a K-nearest neighbors model to predict the development of type 2 diabetes within 2 years in an obese, hypertensive population. Med. Biol. Eng. Comput. 2020, 58, 991–1002. [Google Scholar] [CrossRef] [PubMed]
  12. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  13. Luz, C.F.; Vollmer, M.; Decruyenaere, J.; Nijsten, M.W.; Glasner, C.; Sinha, B. Machine learning in infection management using routine electronic health records: Tools, techniques, and reporting of future technologies. Clin. Microbiol. Infect. 2020, 26, 1291–1299. [Google Scholar] [CrossRef] [PubMed]
  14. Hu, L.Y.; Huang, M.W.; Ke, S.W.; Tsai, C.F. The distance function effect on k-nearest neighbor classification for medical datasets. Springerplus 2016, 5, 1304. [Google Scholar] [CrossRef] [PubMed]
  15. Rajaguru, H.; Sr, R.S. Analysis of Decision Tree and K-Nearest Neighbor Algorithm in the Classification of Breast Cancer. Asian Pac. J. Cancer Prev. 2019, 20, 3777–3781. [Google Scholar] [CrossRef]
  16. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218. [Google Scholar] [CrossRef] [PubMed]
  17. Abu Alfeilat, H.A.; Hassanat, A.B.A.; Lasassmeh, O.; Tarawneh, A.S.; Alhasanat, M.B.; Eyal Salman, H.S.; Prasath, V.B.S. Effects of Distance Measure Choice on K-Nearest Neighbor Classifier Performance: A Review. Big Data 2019, 7, 221–248. [Google Scholar] [CrossRef] [PubMed]
  18. Campillo-Gimenez, B.; Bayat, S.; Cuggia, M. Coupling K-nearest neighbors with logistic regression in case-based reasoning. Stud. Health Technol. Inform. 2012, 180, 275–279. [Google Scholar]
  19. Singh, H.; Sharma, V.; Singh, D. Comparative analysis of proficiencies of various textures and geometric features in breast mass classification using k-nearest neighbor. Vis. Comput. Ind. Biomed. Art 2022, 5, 3. [Google Scholar] [CrossRef] [PubMed]
  20. Short, R.; Fukunaga, K. The optimal distance measure for nearest neighbor classification. IEEE Trans. Inf. Theory 1981, 27, 622–627. [Google Scholar] [CrossRef]
  21. Chen, L.; Wang, C.; Chen, J.; Xiang, Z.; Hu, X. Voice Disorder Identification by using Hilbert-Huang Transform (HHT) and K Nearest Neighbor (KNN). J. Voice 2021, 35, 932.e1–932.e11. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, C.H.; Huang, W.T.; Tan, T.H.; Chang, C.C.; Chang, Y.J. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds. Sensors 2015, 15, 13132–13158. [Google Scholar] [CrossRef] [PubMed]
  23. Hatem, M.Q. Skin lesion classification system using a K-nearest neighbor algorithm. Vis. Comput. Ind. Biomed. Art 2022, 5, 7. [Google Scholar] [CrossRef] [PubMed]
  24. Enriko, I.K.A.; Suryanegara, M.; Gunawan, D. Heart disease prediction system using k-Nearest neighbor algorithm with simplified patient’s health parameters. J. Telecommun. Electron. Comput. Electron. Comput. Eng. 2016, 8, 59–65. Available online: https://scholar.ui.ac.id/en/publications/heart-disease-prediction-system-using-k-nearest-neighbor-algorith (accessed on 31 May 2022).
  25. Brito, T.P.; Guimaraes, A.C.; Oshima, M.M.; Chone, C.T. Mediastinitis: Parotid abscess complication. Braz. J. Otorhinolaryngol. 2014, 80, 268–269. [Google Scholar] [CrossRef]
  26. Ho, C.Y.; Wang, Y.C.; Chin, S.C.; Chen, S.L. Factors Creating a Need for Repeated Drainage of Deep Neck Infections. Diagnostics 2022, 12, 940. [Google Scholar] [CrossRef]
  27. Chen, S.L.; Ho, C.Y.; Chin, S.C.; Wang, Y.C. Factors affecting perforation of the esophagus in patients with deep neck infection. BMC Infect. Dis. 2022, 22, 501. [Google Scholar] [CrossRef]
  28. Wang, L.F.; Kuo, W.R.; Tsai, S.M.; Huang, K.J. Characterizations of life-threatening deep cervical space infections: A review of one hundred ninety-six cases. Am. J. Otolaryngol. 2003, 24, 111–117. [Google Scholar] [CrossRef] [PubMed]
  29. Ferreira, I.G.; Weber, M.B.; Bonamigo, R.R. History of dermatology: The study of skin diseases over the centuries. An. Bras. Dermatol. 2021, 96, 332–345. [Google Scholar] [CrossRef] [PubMed]
  30. Lotsch, J.; Sipila, R.; Tasmuth, T.; Kringel, D.; Estlander, A.M.; Meretoja, T.; Kalso, E.; Ultsch, A. Machine-learning-derived classifier predicts absence of persistent pain after breast cancer surgery with high accuracy. Breast Cancer Res. Treat. 2018, 171, 399–411. [Google Scholar] [CrossRef]
  31. Kleiman, R.S.; LaRose, E.R.; Badger, J.C.; Page, D.; Caldwell, M.D.; Clay, J.A.; Peissig, P.L. Using Machine Learning Algorithms to Predict Risk for Development of Calciphylaxis in Patients with Chronic Kidney Disease. AMIA Jt. Summits Transl. Sci. Proc. 2018, 2017, 139–146. [Google Scholar] [PubMed]
  32. Hsieh, C.H.; Lu, R.H.; Lee, N.H.; Chiu, W.T.; Hsu, M.H.; Li, Y.C. Novel solutions for an old disease: Diagnosis of acute appendicitis with random forest, support vector machines, and artificial neural networks. Surgery 2011, 149, 87–93. [Google Scholar] [CrossRef]
  33. Chan, S.; Reddy, V.; Myers, B.; Thibodeaux, Q.; Brownstone, N.; Liao, W. Machine Learning in Dermatology: Current Applications, Opportunities, and Limitations. Dermatol. Ther. 2020, 10, 365–386. [Google Scholar] [CrossRef]
  34. Howard, F.M.; Kochanny, S.; Koshy, M.; Spiotto, M.; Pearson, A.T. Machine Learning-Guided Adjuvant Treatment of Head and Neck Cancer. JAMA Netw. Open 2020, 3, e2025881. [Google Scholar] [CrossRef]
  35. Angus, D.C. Fusing Randomized Trials with Big Data: The Key to Self-learning Health Care Systems? JAMA 2015, 314, 767–768. [Google Scholar] [CrossRef]
  36. Cruz, J.A.; Wishart, D.S. Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2007, 2, 59–77. [Google Scholar] [CrossRef] [PubMed]
  37. Tan, A.C.; Gilbert, D. Ensemble machine learning on gene expression data for cancer classification. Appl. Bioinform. 2003, 2, S75–S83. [Google Scholar]
  38. Rajkomar, A.; Oren, E.; Chen, K.; Dai, A.M.; Hajaj, N.; Hardt, M.; Liu, P.J.; Liu, X.; Marcus, J.; Sun, M.; et al. Scalable and accurate deep learning with electronic health records. NPJ Digit. Med. 2018, 1, 18. [Google Scholar] [CrossRef]
  39. Elfiky, A.A.; Pany, M.J.; Parikh, R.B.; Obermeyer, Z. Development and Application of a Machine Learning Approach to Assess Short-term Mortality Risk Among Patients With Cancer Starting Chemotherapy. JAMA Netw. Open 2018, 1, e180926. [Google Scholar] [CrossRef]
  40. Peiffer-Smadja, N.; Rawson, T.M.; Ahmad, R.; Buchard, A.; Georgiou, P.; Lescure, F.X.; Birgand, G.; Holmes, A.H. Machine learning for clinical decision support in infectious diseases: A narrative review of current applications. Clin. Microbiol. Infect. 2020, 26, 584–595. [Google Scholar] [CrossRef]
  41. Yu, Z.; Chen, H.; Liu, J.; You, J.; Leung, H.; Han, G. Hybrid k-Nearest Neighbor Classifier. IEEE Trans. Cybern. 2016, 46, 1263–1275. [Google Scholar] [CrossRef]
  42. Bhatia, N.; Vandana. Survey of Nearest Neighbor Techniques. Int. J. Comput. Sci. Inf. Secur. 2010, 8, 302–305. [Google Scholar] [CrossRef]
  43. Wu, X.; Kumar, V.; Quinlan, J.R.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Yu, P.S.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2007, 14, 1–37. [Google Scholar] [CrossRef]
  44. Przybyla-Kasperek, M.; Marfo, K.F. Neural Network Used for the Fusion of Predictions Obtained by the K-Nearest Neighbors Algorithm Based on Independent Data Sources. Entropy 2021, 23, 1568. [Google Scholar] [CrossRef] [PubMed]
  45. Przybyła-Kasperek, M. Three Conflict Methods in Multiple Classifiers that Use Dispersed Knowledge. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 555–599. [Google Scholar] [CrossRef]
  46. Ho, C.Y.; Chin, S.C.; Wang, Y.C.; Chen, S.L. Factors affecting patients with concurrent deep neck infection and aspiration pneumonia. Am. J. Otolaryngol. 2022, 43, 103463. [Google Scholar] [CrossRef]
  47. Chen, M.K.; Wen, Y.S.; Chang, C.C.; Lee, H.S.; Huang, M.T.; Hsiao, H.C. Deep neck infections in diabetic patients. Am. J. Otolaryngol. 2000, 21, 169–173. [Google Scholar] [CrossRef]
  48. Chen, S.L.; Chin, S.C.; Wang, Y.C.; Ho, C.Y. Factors Affecting Patients with Concurrent Deep Neck Infection and Lemierre’s Syndrome. Diagnostics 2022, 12, 928. [Google Scholar] [CrossRef]
  49. Chowdhury, N.I.; Smith, T.L.; Chandra, R.K.; Turner, J.H. Automated classification of osteomeatal complex inflammation on computed tomography using convolutional neural networks. Int. Forum Allergy Rhinol. 2019, 9, 46–52. [Google Scholar] [CrossRef]
  50. Benitez, J.M.; Castro, J.L.; Requena, I. Are artificial neural networks black boxes? IEEE Trans. Neural Netw. 1997, 8, 1156–1164. [Google Scholar] [CrossRef] [PubMed]
  51. Tickle, A.B.; Andrews, R.; Golea, M.; Diederich, J. The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Trans. Neural Netw. 1998, 9, 1057–1068. [Google Scholar] [CrossRef]
Figure 1. Parameters measured on computed tomography (CT) scans. (A,B) The maximum diameter of the abscess was determined based on axial, coronal, and sagittal CT scans. (C,D) The distance between the abscess and upper airway inlet was measured on axial scans. Arrowhead, upper airway inlet; double arrow, distance measured on CT scans.
Figure 2. Training and test datasets for the deep learning model.
Figure 3. Diagram of the k-nearest neighbor model. In (A,B), green dots represent training group patients who underwent tracheostomy, and blue dots represent training patients who did not undergo tracheostomy. Red dots represent test group patients. The dotted line distinguishes cases in which tracheostomy was performed from those in which it was not performed. The circles are the nearest neighbors to test and training group instances.
Table 1. Clinical characteristics of the 392 patients with deep neck infection.
Characteristics | N (%)
Gender | 392 (100.0)
  Male | 261 (66.58)
  Female | 131 (33.42)
Age, years ± SD | 51.36 ± 18.74
Chief complaint period, days ± SD | 5.04 ± 4.49
WBC, /μL ± SD | 15,007.39 ± 5801.19
CRP, mg/L ± SD | 156.94 ± 99.61
Blood sugar, mg/dL ± SD | 142.66 ± 72.46
Diabetes mellitus | 147 (37.50)
Deep neck infection space involved |
  Single space | 108 (27.55)
  Double spaces | 151 (38.52)
  Multiple spaces, ≥3 | 133 (33.93)
Mediastinitis | 20 (5.10)
Maximum diameter of abscess, cm ± SD | 6.36 ± 3.08
Nearest distance from abscess to inlet of trachea, cm ± SD | 1.41 ± 1.35
Tracheostomy performance | 50 (12.75)
N, numbers; SD, standard deviation; WBC, white blood cell (normal range: 3500–11,000/μL); CRP, C-reactive protein (normal range < 5 mg/L); blood sugar (normal range: 70–100 mg/dL). Maximum diameter of abscess and nearest distance from abscess to inlet of trachea were measured on CT scans.
Table 2. Clinical and computed tomography data of the training and test groups.
Characteristics | Training Group; N (%) | Test Group; N (%) | p Value
Gender | 317 (100.0) | 75 (100.0) |
  Male | 215 (67.82) | 46 (61.33) | 0.340
  Female | 102 (32.18) | 29 (38.67) |
Age, years ± SD | 50.88 ± 18.89 | 53.40 ± 18.06 | 0.364
Chief complaint period, days ± SD | 5.20 ± 4.79 | 4.34 ± 2.79 | 0.455
WBC, /μL ± SD | 14,824.91 ± 5732.75 | 15,778.66 ± 6060.84 | 0.240
CRP, mg/L ± SD | 155.08 ± 98.23 | 164.81 ± 105.52 | 0.511
Blood sugar, mg/dL ± SD | 140.51 ± 70.13 | 151.77 ± 81.46 | 0.080
Diabetes mellitus | | | 0.598
  Yes | 121 (38.17) | 26 (34.66) |
  No | 196 (61.83) | 49 (65.34) |
Deep neck infection space involved | | |
  Single space | 92 (29.02) | 16 (21.33) | 0.198
  Double spaces | 120 (37.85) | 31 (41.33) | 0.599
  Multiple spaces, ≥3 | 105 (33.13) | 28 (37.34) | 0.499
Mediastinitis | | | 0.557
  Yes | 15 (4.73) | 5 (6.66) |
  No | 302 (95.27) | 70 (93.34) |
Maximum diameter of abscess, cm ± SD | 6.23 ± 2.91 | 6.92 ± 3.71 | 0.293
Nearest distance from abscess to inlet of trachea, cm ± SD | 1.49 ± 1.44 | 1.03 ± 0.79 | 0.169
Tracheostomy performance | | | 0.700
  Yes | 42 (13.24) | 8 (10.66) |
  No | 275 (86.76) | 67 (89.34) |
N, numbers; SD, standard deviation; WBC, white blood cell (normal range: 3500–11,000/μL); CRP, C-reactive protein (normal range < 5 mg/L); blood sugar (normal range: 70–100 mg/dL). Maximum diameter of abscess and nearest distance from abscess to inlet of trachea were measured on CT scans.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

