Article

Clinical Decision Support System to Detect the Occurrence of Ventilator-Associated Pneumonia in Pediatric Intensive Care

1 Pediatric Intensive Care Unit, Sainte-Justine Hospital, Montreal, QC H3T 1C5, Canada
2 Pediatric and Neonatal Intensive Care Unit, Armand-Trousseau Hospital, Sorbonne University, 75012 Paris, France
3 Research Center, Sainte-Justine Hospital, Montreal, QC H3T 1C5, Canada
4 Pediatric Intensive Care Unit, Caen University Hospital, 14000 Caen, France
5 School of Public Health, Montréal University, Montreal, QC H2X 3E4, Canada
* Author to whom correspondence should be addressed.
Submission received: 24 August 2023 / Revised: 15 September 2023 / Accepted: 15 September 2023 / Published: 18 September 2023

Abstract

Objectives: Ventilator-associated pneumonia (VAP) is a severe care-related disease. The Centers for Disease Control (CDC) defined its diagnostic criteria; however, the pediatric criteria are mainly subjective and retrospective. Clinical decision support systems have recently been developed in healthcare to help physicians detect severe pathologies earlier and more accurately. We aimed to develop a predictive model providing early diagnosis of VAP at the bedside in a pediatric intensive care unit (PICU). Methods: We performed a retrospective single-center study at a tertiary-care pediatric teaching hospital. All patients treated with invasive mechanical ventilation between September 2013 and October 2019 were included. Data were collected from the PICU electronic medical record and a high-resolution research database. The clinical decision support system was then developed using open-access R software (version 3.6.1). Measurements and main results: In total, 2077 children were mechanically ventilated. We identified 827 episodes with at least 48 h of invasive mechanical ventilation and 77 patients who suffered from at least one VAP event. We split our database at the patient level into a training set of 461 patients free of VAP and 45 patients with VAP, and a testing set of 199 patients free of VAP and 20 patients with VAP. The Imbalanced Random Forest model provided the best fit, with an area under the ROC curve on the testing set of 0.82 (95% CI: 0.71–0.93). An optimal threshold of 0.41 gave a sensitivity of 79.7% and a specificity of 72.7%, with a positive predictive value (PPV) of 9%, a negative predictive value of 99%, and an accuracy of 79.5% (95% CI: 0.77–0.82). Conclusions: Using machine learning, we developed a clinical predictive algorithm based on clinical data stored prospectively in a database. The next step will be to implement the algorithm in PICUs to provide early, automatic detection of ventilator-associated pneumonia.

1. Introduction

Ventilator-associated pneumonia (VAP) is a common and severe complication in intensive care units. As a care-related complication, VAP worsens the prognosis of affected patients, and its early diagnosis remains an ongoing challenge in intensive care. In an attempt to enhance VAP detection, the Centers for Disease Control (CDC) issued diagnostic criteria allowing the identification of VAP after 48 h of clinical alteration (defined by worsening gas exchange, fever >38 °C or hypothermia, leukocytosis >15,000/mm3 or leukopenia <4000/mm3, new onset of purulent sputum, apnea or tachypnea, wheezing/rales/rhonchi, cough, and bradycardia <100/min or tachycardia >170/min) [1]. However, delays in VAP diagnosis and, to some extent, in initiating anti-infectious therapy are observed and associated with worse outcomes [2,3,4]. Furthermore, the subjective criteria included in the CDC pediatric definition of VAP (changes in the appearance and amount of sputum, worsening of an existing cough) result in variability in VAP diagnosis and incidence [5,6,7]. To help physicians prospectively diagnose VAP, the CDC developed the concept of Ventilator-Associated Events (VAE) in adults, but children were long excluded from this definition [1]. As is usual for adult recommendations, children were excluded mainly because of physiological differences between the two populations (normal respiratory parameters for an adult are very different from those of a child). Cirulis et al. [8] proposed a modified pediatric VAE definition. Chomton et al. [9] evaluated this modified pediatric VAE definition for detecting VAP, but its sensitivity (66%) for identifying this ICU-related complication remained disappointing.
In recent years, the number of publications on the development of computerized clinical decision support systems (CDSS) to improve disease diagnosis has increased, and such systems have been shown to be useful for several diseases in ICUs [10,11,12,13,14]. These developments are supported by the emergence of high-resolution databases [15], which allow a precise and continuous analysis of clinical and biological parameters. Leisman et al. [16] recently reported several recommendations for the development and reporting of predictive models. They identified two categories of predictive models: (1) clinical prediction models for bedside use, and (2) other prediction models intended for deployment across populations for research, benchmarking, and administrative purposes. The usefulness of CDSS had already been highlighted by Mack et al. [17], but no reports on VAP are currently available. Accordingly, the main objective of this project was to develop a predictive model providing early diagnosis of VAP at the bedside in a pediatric intensive care unit (PICU).

2. Materials and Methods

This single-center retrospective study was performed using data collected in the PICU electronic medical record (Intelligence Critical Care and Anesthesia (ICCA®); Philips Medical, version F0.1) of a tertiary-care pediatric teaching hospital (Sainte-Justine Hospital, Montréal, QC, Canada). To improve data quality, ICCA® was configured with drop-down menus and critical-value alerts. Furthermore, all data entered in ICCA® underwent a medically endorsed validation.
The hospital database was queried using SQL Server Management Studio 18® (Microsoft, Redmond, WA, USA) to select patients who were aged from 1 day to 18 years at PICU admission and were mechanically ventilated for more than 48 h, between September 2013 and October 2019. We analyzed the first 30 days of invasive mechanical ventilation.
During the first step of the study, all medical files were reviewed by two senior pediatric intensive care experts (JR and PJ) to classify patients into two groups: VAP patients and free-of-VAP patients. VAP was defined according to the 2021 CDC criteria [1]: (1) context criterion: invasive mechanical ventilation for more than 48 h; (2) radiological criterion: new or progressive and persistent infiltrate, consolidation, or cavitation; (3) clinical criteria: worsening gas exchange, fever >38 °C or hypothermia, leukocytosis >15,000/mm3 or leukopenia <4000/mm3, new onset of purulent sputum, apnea or tachypnea, wheezing/rales/rhonchi, cough, and bradycardia <100/min or tachycardia >170/min.
The second step of the study consisted of extracting data from the electronic medical record (ICCA®, Philips, Toronto, ON, Canada) and the high-resolution database (a database collecting and storing data from medical devices in real time) [15]. The queried data were date, time, weight (kg), white blood cell count (/mm3), neutrophil count (/mm3), partial pressure of carbon dioxide (PaCO2 in mmHg), partial pressure of oxygen (PaO2 in mmHg), inspired fraction of oxygen (FiO2 in %), positive end-expiratory pressure (PEEP in cmH2O), peak inspiratory pressure (PIP in cmH2O), mean airway pressure (MAwP in cmH2O), respiratory rate (breaths/min), tidal volume (mL), subjective amount of respiratory tract secretions (0, +, ++, +++), oxygenation index (OI) and oxygen saturation index (OSI) [18], and calculated pulmonary dynamic compliance (in barometric ventilation mode: tidal volume/(PIP − PEEP); in volumetric ventilation mode: tidal volume/(peak pressure − PEEP)). We also gathered the PIM 2 [19] and PELOD-2 scores [20,21].
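For illustration, the derived variables described above can be computed as in the following minimal R sketch; the data frame and column names (fio2, mawp, pao2, spo2, tidal_volume, pip, peep) are assumptions for the example, not the study's actual extraction code.

```r
# Minimal sketch (assumed column names, not the study's schema): derived
# respiratory variables computed from raw measurements in a data frame `vitals`.
compute_derived <- function(vitals) {
  transform(vitals,
    # Oxygenation index: FiO2 (%) x mean airway pressure (cmH2O) / PaO2 (mmHg)
    oi   = fio2 * mawp / pao2,
    # Oxygen saturation index: same numerator over pulse oximetry saturation (%)
    osi  = fio2 * mawp / spo2,
    # Dynamic compliance: tidal volume (mL) / (PIP - PEEP) (cmH2O)
    cdyn = tidal_volume / (pip - peep)
  )
}

# Toy example with made-up values
vitals <- data.frame(fio2 = 40, mawp = 12, pao2 = 90, spo2 = 97,
                     tidal_volume = 120, pip = 24, peep = 6)
compute_derived(vitals)
```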
Data formatting. The data were formatted using R (version 3.6.1) in preparation for training the prediction models based on different algorithms.
All times were expressed as a relative duration since ICU admission.
Data cleaning and missing data. Inconsistent data were identified and corrected according to the scheme described in Supplementary Data S1 “Data Cleaning”. Variables consisting of streams of continuous values were imputed using the last-observation-carried-forward method. For missing data at the beginning of a stream, the first valid observation was carried backward.
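The forward and backward filling described above can be sketched in R with the ‘zoo’ package; the stream below is a made-up example rather than study data.

```r
library(zoo)  # na.locf(): last observation carried forward

# Made-up data stream with leading and mid-stream gaps
stream <- c(NA, NA, 7.2, NA, 7.4, NA)

filled <- na.locf(stream, na.rm = FALSE)    # carry the last observation forward
filled <- na.locf(filled, fromLast = TRUE)  # carry the first valid observation backward
filled
# 7.2 7.2 7.2 7.2 7.4 7.4
```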
Segmenting Variables in Time Blocks. The variables data streams were first segmented into time blocks of 6 h and then for each variable the median (mode for the discrete variable) was calculated over each 6 h time block to avoid aberrant or missing data. Then, the 6 h blocks were aggregated into 48 h time blocks. We chose to aggregate into 48 h time blocks to be as close as possible to the actual VAP timing definition. For each variable, two columns were generated. One consisted of the first non-missing value among the 6 h time blocks and the other one the last non-missing value among the 6 h time blocks, if there was any, in each 48 h time block (if there was no observation, the data was considered missing). For the development of the algorithms, for each variable, the first non-missing values and the actual difference or relative change of the values of the two columns were considered (more details are available in Supplementary Data S2 “Segmenting variables in time blocks”).
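The following R sketch illustrates one possible implementation of this two-level aggregation with ‘dplyr’; the data frame, column names, and helper function are assumptions for illustration, not the code used in the study.

```r
library(dplyr)

# Sketch of the time-block aggregation (illustrative names, not the study's code).
# `df` has one row per measurement, with `time_h` = hours since ICU admission.
segment_blocks <- function(df, value_col) {
  df %>%
    mutate(block6  = floor(time_h / 6),           # 6 h block index
           block48 = floor(time_h / 48)) %>%      # 48 h block index
    group_by(block48, block6) %>%
    summarise(med6 = median(.data[[value_col]], na.rm = TRUE), .groups = "drop") %>%
    group_by(block48) %>%
    summarise(first6 = first(na.omit(med6)),      # first non-missing 6 h median
              last6  = last(na.omit(med6)),       # last non-missing 6 h median
              .groups = "drop") %>%
    mutate(delta = last6 - first6)                # change over the 48 h block
}

# Toy example with made-up respiratory-rate measurements
df <- data.frame(time_h = c(1, 5, 10, 30, 47), rr = c(30, 32, 28, 26, 25))
segment_blocks(df, "rr")
```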
Stratified train-test split at a patient level. VAP patients and non-VAP patients were split into the training set (70% of each class) and the testing set (remaining 30% of each class). Since some patients had more than one stay in the PICU, all stays of a patient in the training set were kept in the training set (and the same for patients in the testing set). All details for the train-test split are available in Supplementary Data S3 “train-test split”.
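A patient-level stratified split along these lines could be sketched as follows; the patients table, column names, and random seed are illustrative assumptions.

```r
# Sketch of a stratified 70/30 split at the patient level (illustrative names:
# `patients` has one row per patient, with columns patient_id and vap).
set.seed(2023)

patient_level_split <- function(patients, train_frac = 0.70) {
  train_ids <- unlist(lapply(split(patients$patient_id, patients$vap), function(ids) {
    sample(ids, size = floor(train_frac * length(ids)))  # 70% of each class
  }))
  list(train = train_ids, test = setdiff(patients$patient_id, train_ids))
}

# Toy example: 10 patients, 2 with VAP
patients <- data.frame(patient_id = 1:10, vap = c(rep(FALSE, 8), rep(TRUE, 2)))
ids <- patient_level_split(patients)
# All PICU stays of a given patient are then assigned to that patient's set, e.g.:
# train_stays <- subset(stays, patient_id %in% ids$train)
```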
Imputation. Preliminary inspection of the dataset showed that around 50% of data were missing for the variables “pulmonary dynamic compliance” and “minute ventilation”. Missing-value imputation in the training dataset was performed with the ‘randomForest’ package (v4.6-14) using the function ‘rfImpute’ [22]. The imputed values were the weighted averages of the non-missing observations, with weights given by the random forest proximities. In the testing set, the missing values of each variable with missing values in the training set were replaced by the mean of that variable’s imputed training values (more details are available in Supplementary Data S4 “Imputation”).
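A minimal sketch of this imputation step, using the ‘rfImpute’ function named above on a toy data frame (the variable names and simulated values are assumptions, not study data):

```r
library(randomForest)  # v4.6-14 provides rfImpute()
set.seed(2023)

# Toy training frame standing in for the study data (illustrative names/values):
# a factor outcome `vap` and two predictors with ~40% missing values.
train_df <- data.frame(
  vap        = factor(rep(c("yes", "no"), each = 25)),
  compliance = rnorm(50, mean = 1.0, sd = 0.2),
  min_vent   = rnorm(50, mean = 3.5, sd = 0.8)
)
train_df$compliance[sample(50, 20)] <- NA
train_df$min_vent[sample(50, 20)]   <- NA

# Proximity-weighted random forest imputation on the training set
train_imp <- rfImpute(vap ~ ., data = train_df, iter = 5, ntree = 300)

# In the test set, missing values of each variable are replaced by the mean of
# that variable's imputed training values, e.g.:
# test_df$compliance[is.na(test_df$compliance)] <- mean(train_imp$compliance)
```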
Predictive models. We applied six different learning algorithms to generate predictive models: Random Forest with the function ‘rfsrc’ and error rate as the measure of performance; Imbalanced Random Forest with the function ‘imbalanced’ and G-mean as the measure of performance; Stepwise Regression and Random Forest with 5-fold cross-validation (5-CV) using the ‘train’ function with the ‘glmStepAIC’ and ‘rf’ methods, where accuracy was used to select the optimal model by the largest value [23]; and, finally, Elastic Net Regression (5-CV) and Weighted Elastic Net Regression (5-CV) with the ‘glmnet’ method, where the ROC metric was used to select the optimal model by the largest value. The hyperparameters for the Random Forest, Imbalanced Random Forest, and Stepwise Regression (5-CV) algorithms were ‘ntree’ (number of trees used at the tuning step) and ‘mtry’ (number of variables randomly selected as candidates for splitting a node) [24]. The parameters of Elastic Net Regression were alpha, which controls the relative balance between the lasso and ridge penalties, and lambda, which controls the amount of the penalty. All these models used readily available implementations in R [25,26]. Cross-validation was performed inside the training set only (more details are available in Supplementary Data S5 “Predictive models”).
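As an example of the Imbalanced Random Forest step, the sketch below fits the ‘imbalanced’ function from ‘randomForestSRC’ to the imputed training blocks; the data, hyperparameter values, and class labels are illustrative and not the study’s tuned settings.

```r
library(randomForestSRC)  # provides rfsrc() and imbalanced()
set.seed(2023)

# Sketch: Imbalanced Random Forest on the imputed training blocks
# (`train_imp` from the previous sketch; ntree value is illustrative).
fit_irf <- imbalanced(vap ~ ., data = train_imp, ntree = 500)
print(fit_irf)  # out-of-bag performance summary (the study used the G-mean for this algorithm)

# Predicted class probabilities for new 48 h blocks would be obtained with, e.g.:
# prob_vap <- predict(fit_irf, newdata = test_blocks)$predicted[, "yes"]
```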
Performance measure and model choice. Models resulting from the different algorithms were evaluated, at the level of 48 h time blocks, on the training and testing sets by calculating their AUC score and by determining classification thresholds reaching predetermined levels of sensitivity (80%, 85%, 90%, 95%). The final model was chosen based on its capacity to (1) maximize specificity under these sensitivity levels and (2) generalize the sensitivity and specificity to the test set. The area under the ROC curve (AUC) was considered the primary measure of performance for choosing the best model.
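One possible way to derive thresholds at predetermined sensitivity levels, the corresponding specificities, and the AUC with its confidence interval is sketched below with the ‘pROC’ package; the variable names and toy data are assumptions.

```r
library(pROC)

# Sketch: choose classification thresholds reaching predetermined sensitivity levels
# and read off the corresponding specificity. `truth` (observed VAP status) and
# `prob` (predicted probability per 48 h block) are illustrative names.
threshold_for_sensitivity <- function(truth, prob,
                                      target_sens = c(0.80, 0.85, 0.90, 0.95)) {
  roc_obj <- roc(truth, prob, quiet = TRUE)
  pts <- coords(roc_obj, x = "all",
                ret = c("threshold", "sensitivity", "specificity"),
                transpose = FALSE)
  do.call(rbind, lapply(target_sens, function(s) {
    eligible <- pts[pts$sensitivity >= s, ]
    eligible[which.max(eligible$specificity), ]  # maximize specificity under the constraint
  }))
}

# Toy example; ci.auc() gives the AUC and its 95% confidence interval
set.seed(1)
truth <- rbinom(300, 1, 0.05)
prob  <- runif(300)
threshold_for_sensitivity(truth, prob)
ci.auc(roc(truth, prob, quiet = TRUE))
```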
Per-patient validation. The final model was evaluated on its capacity to correctly assess the infection status of patients over time, using the predictions obtained after setting different classification thresholds. The numbers of patients with accurate predictions (i.e., predicted class = observed VAP status) and inaccurate predictions (i.e., predicted class ≠ observed VAP status) were computed over time, and the patients for whom the predictions contained at least one error were identified. We then examined the accuracy of predictions by stratifying patients into two groups according to the number of observed time blocks, identified the patients with at least one prediction error in each subgroup, and calculated the global error rate for each subgroup.
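The per-patient error-rate computation could be sketched as follows; the predictions table, its column names, and the grouping labels are illustrative assumptions based on the description above and Table 5.

```r
library(dplyr)

# Sketch of the per-patient validation (assumed frame `preds`, one row per patient
# and 48 h block, with columns patient_id, predicted_class, observed_vap).
per_patient_error_rates <- function(preds) {
  preds %>%
    group_by(patient_id) %>%
    summarise(n_blocks  = n(),
              any_error = any(predicted_class != observed_vap),  # at least one error
              vap       = any(observed_vap == 1),
              .groups = "drop") %>%
    mutate(group = ifelse(n_blocks <= 3, "G1 (<= 3 blocks)", "G2 (>= 4 blocks)")) %>%
    group_by(group, vap) %>%
    summarise(error_rate = 100 * mean(any_error), .groups = "drop")  # % of patients with errors
}

# Toy example
preds <- data.frame(patient_id      = c(1, 1, 2, 2, 2, 2, 3),
                    predicted_class = c(0, 1, 0, 0, 0, 0, 1),
                    observed_vap    = c(0, 1, 0, 0, 1, 1, 1))
per_patient_error_rates(preds)
```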

Statistics

Development of the clinical decision support system was performed using open-access R software, version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria). Statistical analysis of patients’ characteristics was performed using GraphPad Prism® software (version 7.05; GraphPad Inc., San Diego, CA, USA). A Kolmogorov–Smirnov test was used to assess the normal distribution of continuous variables. In the population description, categorical variables are expressed as frequencies with corresponding proportions, and quantitative variables as means with standard deviations. Performance evaluation was conducted using ROC curves, AUC and their confidence intervals, and derived measures of sensitivity and specificity.
The Saint-Justine ethical committee approved the study as a retrospective study and waived the need for written consent (n°2020–2454).

3. Results

3.1. General Description of the Population

A total of 5153 children were hospitalized in the Sainte-Justine PICU during the study period, of which 40% (2077) were mechanically ventilated, and 1235 episodes with more than 48 h of invasive mechanical ventilation were identified (Figure 1). Seventy-seven patients had at least one VAP event, and seventy-eight VAP events (6%) were diagnosed by the two experts. The patients’ general characteristics are described in Table 1.
Patients with less than 4 days of mechanical ventilation were removed (see Supplementary Data S2 “Segmenting variables”), leaving 811 episodes of invasive mechanical ventilation. The split into a training set (70% of each class) and a testing set (remaining 30% of each class) resulted in a training set of 461 patients free of VAP and 45 patients with VAP, and a testing set of 199 patients free of VAP and 20 patients with VAP. Since some patients had more than one stay in the ICU, there could be several episodes for the same patient. The training set thus had 513 stays with no VAP event and 45 stays with a VAP event, and the testing set had 231 stays with no VAP event and 22 stays with a VAP event. Segmenting the variables into non-overlapping 48 h time blocks generated, from these datasets, 1852 time blocks free of VAP and 45 time blocks with VAP in the training set, and 788 time blocks free of VAP and 22 time blocks with VAP in the testing set.
We observed similar characteristics in the train and test groups (Table 2).

3.2. Missing Data

We observed two missing values for the “SF ratio” and “oxygen saturation index (OSI)” variables in the test set (0.1% of total observations). For the variables “pulmonary dynamic compliance” and “minute ventilation”, the proportions of missing values were 0.49 in the train set and 0.54 in the test set.

3.3. Results of Training Algorithm

The Imbalanced Random Forest model was considered the best fit, with an area under the ROC curve of 0.86 on the train set.
Thresholds and specificities corresponding to the predetermined levels of sensitivity are presented in Table 3. The variable importances obtained from the Imbalanced Random Forest model are presented in Figure 2.

3.4. Performance on Test Dataset

The area under the ROC curve from fitting the Imbalanced Random Forest model on the test set was 0.82 (95% CI: 0.71–0.93) (Figure 3).
The specificity and sensitivity obtained after setting different classification thresholds are presented in Table 4. An optimal threshold of 0.41 gave a sensitivity of 79.7% and a specificity of 72.7%, with a positive predictive value (PPV) of 9%, a negative predictive value of 99%, and an accuracy of 79.5% (95% CI: 0.77–0.82).

3.5. Per Patient Validation

Performance of the final model was evaluated over different time periods. Time periods were defined starting from the first time block and going up to a given time block in the future. The confusion matrices for all the time periods were constructed. False positive rates (FPR), true positive rates (TPR), and area under the curve (AUC) were calculated. The results are presented in Figure 4. The procedure is explained in detail in Supplementary Data S6 “Per patient validation”.
The global error rate is presented in Table 5. We observed a lower error rate for patients with at most three time blocks of observations, compared to the ones with at least four time blocks of observations.

4. Discussion

Using an electronic medical record, an algorithm supporting clinicians in the early diagnosis of ventilator-associated pneumonia in the PICU achieved a sensitivity of 80% and a specificity of 73% at a threshold of 0.41. To date, this is the highest sensitivity achieved by a CDSS for the early detection of VAP.
Ventilator-associated pneumonia is a severe healthcare-associated disease [2,27,28]. To reduce the delay and improve the accuracy of this challenging diagnosis, Cirulis et al. [8] evaluated the accuracy of the adult ventilator-associated event (VAE) criteria for the early diagnosis of pediatric VAP and developed modified pediatric VAE criteria (increase in FiO2 by 20% or in PEEP by 2 cmH2O sustained for more than one day). The VAE and modified pediatric VAE criteria both had disappointing sensitivities of 23% and 56% for Cirulis et al. [8] and 56% and 66% for Chomton et al. [9], respectively. Our algorithm, based on machine learning methods, improved this sensitivity and could be implemented to screen patient data in real time to provide early detection of VAP in children. The predictions for the test set using the Imbalanced Random Forest model are stored in a file available on GitHub [29].
The implementation of clinical decision support systems is a promising approach for helping physicians make medical decisions [10,30], analyze chest X-rays [31], or increase diagnostic sensitivity [32]. The development methodology starts with a retrospective classification of the analyzed patients to define whether they developed the studied condition (e.g., VAP). This step is crucial to developing an accurate algorithm and relies on the quality of the classification method. In a large review of published CDSS, Ostropolets et al. [33] highlighted that only one manuscript addressed confounding and bias due to misclassification. Our classification methodology included all the relevant clinically collected data from the electronic medical record, which currently offers the best achievable accuracy.
In addition to the classification methodology, the main strengths of this study include the use of the continuous vital signs and ventilatory parameters monitoring database, limiting the amount of missing data and allowing the algorithm to be used in real time in the future [15]. The variables extracted from this monitoring included the OSI, the variation of pulmonary compliance, minute ventilation, and median ventilatory pressures. The algorithm identified the variation of PEEP during the 48 h preceding the VAP as the most important criterion, as suggested by the CDC definition. Nevertheless, the variation of the mean airway pressure was the second most important variable. This result seems crucial because the mean airway pressure, which reflects not only PEEP but also PIP, the I:E ratio, and instantaneous gas flow, is not included in the CDC diagnostic criteria for VAP.
Nonetheless, we noticed that our algorithm is better at diagnosing early VAP (before day 6 of the PICU stay) than late VAP (after day 6 of the PICU stay), with prediction error rates of 23.08% vs. 66.67%, respectively. We hypothesize that the longer a patient stays in the PICU, the more subtle the variations to be detected become, owing to the potential alteration of the patient’s condition.
This study has several strengths. First, this is the first study in which a CDSS reached over 80% sensitivity for VAP detection. Second, despite being a single-center study, we report one of the largest numbers of patients included in a study in children. Finally, we report the highest sensitivity and specificity for diagnosing VAP.
Despite these promising results, this work has several limitations. First, invasive procedures (bronchoscopy, transport) were not considered in our algorithm because of the lack of data on the timing between these procedures and the VAP. Second, the reason for invasive mechanical ventilation was not reported in all medical files, although it is well known that brain injury and neurological disorders with impaired swallowing predispose to pneumonia. Third, missing data were handled with data-driven approaches (last observation carried forward for mid-stream gaps, first observation carried backward for gaps at the beginning of a stream) that did not model the missing-data process; in addition, the classification of VAP and non-VAP patients was performed retrospectively, which may have resulted in some misinterpretation of the clinical data. Fourth, for generalizability, a prospective validation of the algorithm in several PICUs needs to be conducted.

5. Conclusions

We developed the first clinical predictive system dedicated to VAP diagnosis in PICUs using a high-fidelity database. The implementation of such an algorithm in PICUs could allow physicians to be alerted early in cases of respiratory function impairment and to decide whether to perform respiratory tract analysis and start anti-infective treatment. Although this algorithm achieves promising sensitivity and specificity, it still lacks power and cannot yet be implemented in PICUs. It also needs to be prospectively validated in other PICUs to confirm its reproducibility and external validity.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/diagnostics13182983/s1, Supplementary Data S1 “Data Cleaning”; Supplementary Data S2 “Segmenting variables in time blocks”; Supplementary Data S3 “train-test split”; Supplementary Data S4 “Imputation”; Supplementary Data S5 “Predictive models”; Supplementary Data S6 “Per patient validation”.

Author Contributions

Conceptualization, M.C., P.J. and J.R.; Methodology, J.R., S.A.O. and M.S. (Michael Sauthier); Software, S.A.O., M.S. (Michael Sauthier) and S.D.M.; Formal analysis, M.S. (Michael Sauthier) and S.D.M.; Investigation, J.R.; Data curation, M.S. (Masoumeh Sajedi) and S.D.M.; Writing—original draft, J.R.; Writing—review & editing, J.R. and M.C.; Supervision, P.J.; Project administration, P.J.; Funding acquisition, P.J. All authors have read and agreed to the published version of the manuscript.

Funding

J.R. received a studentship from the scholarship supplement program of the Quebec Respiratory Health Research Network. This work was supported in part by the Quebec Respiratory Health Research Network. P.J. received scientific research funds from the Canadian Foundation for Innovation, the Fonds de Recherche du Québec, the Quebec Ministry of Health, and Sainte-Justine Hospital.

Institutional Review Board Statement

The Saint-Justine ethical committee approved the study as a retrospective study and waived the need for written consent (n°2020–2454).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on demand.

Acknowledgments

The authors wish to thank the research center of Sainte-Justine University Hospital for its support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AUC: area under the curve
CDC: Centers for Disease Control
CDSS: clinical decision support system
FiO2: inspired fraction of oxygen
FPR: false positive rate
ICCA: Intelligence Critical Care and Anesthesia®
MAwP: mean airway pressure
OI: oxygenation index
OSI: oxygen saturation index
PaCO2: partial pressure of carbon dioxide
PaO2: partial pressure of oxygen
PELOD-2: Pediatric Logistic Organ Dysfunction-2 score
PEEP: positive end-expiratory pressure
PICU: pediatric intensive care unit
PIM 2: Pediatric Index of Mortality 2
PIP: peak inspiratory pressure
ROC: receiver operating characteristic
TPR: true positive rate
VAE: ventilator-associated event
VAP: ventilator-associated pneumonia

References

  1. Center for Disease Control. 2018. Available online: https://www.cdc.gov/nhsn/PDFs/pscManual/6pscVAPcurrent.pdf (accessed on 14 September 2023).
  2. Chastre, J.; Fagon, J.-Y. Ventilator-associated pneumonia. Am. J. Respir. Crit. Care Med. 2002, 165, 867–903. [Google Scholar] [CrossRef] [PubMed]
  3. Gutiérrez, J.M.M.; Borromeo, A.R.; Dueño, A.L.; Paragas, E.D.; Ellasus, R.O.; Abalos-Fabia, R.S.; Abriam, J.A.; Sonido, A.E.; Hernandez, M.A.; Generale, A.J.A.; et al. Clinical epidemiology and outcomes of ventilator-associated pneumonia in critically ill adult patients: Protocol for a large-scale systematic review and planned meta-analysis. Syst. Rev. 2019, 8, 180. [Google Scholar] [CrossRef] [PubMed]
  4. Papazian, L.; Klompas, M.; Luyt, C.-E. Ventilator-associated pneumonia in adults: A narrative review. Intensive Care Med. 2020, 46, 888–906. [Google Scholar] [CrossRef]
  5. Tullu, M.S.; Balasubramanian, P. Ventilator-Associated Pneumonia in Pediatric Intensive Care Unit: Correspondence. Indian J. Pediatr. 2015, 82, 662–663. [Google Scholar] [CrossRef]
  6. Shaath, G.A.; Jijeh, A.; Faruqui, F.; Bullard, L.; Mehmood, A.; Kabbani, M.S. Ventilator-associated pneumonia in children after cardiac surgery. Pediatr. Cardiol. 2014, 35, 627–631. [Google Scholar] [CrossRef]
  7. Ericson, J.E.; McGuire, J.; Michaels, M.G.; Schwarz, A.; Frenck, R.; Deville, J.G.; Agarwal, S.; Bressler, A.M.; Gao, J.; Spears, T.; et al. Hospital-acquired Pneumonia and Ventilator-associated Pneumonia in Children: A Prospective Natural History and Case-Control Study. Pediatr. Infect. Dis. J. 2020, 39, 658–664. [Google Scholar] [CrossRef] [PubMed]
  8. Cirulis, M.M.; Hamele, M.T.; Stockmann, C.R.; Bennett, T.D.; Bratton, S.L. Comparison of the New Adult Ventilator-Associated Event Criteria to the Centers for Disease Control and Prevention Pediatric Ventilator-Associated Pneumonia Definition (PNU2) in a Population of Pediatric Traumatic Brain Injury Patients. Pediatr. Crit. Care Med. 2016, 17, 157–164. [Google Scholar] [CrossRef]
  9. Chomton, M.; Brossier, D.; Sauthier, M.; Vallières, E.; Dubois, J.; Emeriaud, G.; Jouvet, P. Ventilator-Associated Pneumonia and Events in Pediatric Intensive Care: A Single Center Study. Pediatr. Crit. Care Med. 2018, 19, 1106–1113. [Google Scholar] [CrossRef] [PubMed]
  10. Giannini, H.M.; Ginestra, J.C.; Chivers, C.; Draugelis, M.; Hanish, A.; Schweickert, W.D.; Fuchs, B.D.; Meadows, L.; Lynch, M.; Donnelly, P.J.; et al. A Machine Learning Algorithm to Predict Severe Sepsis and Septic Shock: Development, Implementation, and Impact on Clinical Practice. Crit. Care Med. 2019, 47, 1485–1492. [Google Scholar] [CrossRef]
  11. Chen, C.-H.; Lee, Y.-W.; Huang, Y.-S.; Lan, W.-R.; Chang, R.-F.; Tu, C.-Y.; Chen, C.-Y.; Liao, W.-C. Computer-aided diagnosis of endobronchial ultrasound images using convolutional neural network. Comput. Methods Programs Biomed. 2019, 177, 175–182. [Google Scholar] [CrossRef] [PubMed]
  12. Roggeveen, L.F.; Guo, T.; Driessen, R.H.; Fleuren, L.M.; Thoral, P.; van der Voort, P.H.J.; Girbes, A.R.J.; Bosman, R.J.; Elbers, P. Right Dose, Right Now: Development of AutoKinetics for Real Time Model Informed Precision Antibiotic Dosing Decision Support at the Bedside of Critically Ill Patients. Front. Pharmacol. 2020, 11, 646. [Google Scholar] [CrossRef] [PubMed]
  13. Lauritsen, S.M.; Kalør, M.E.; Kongsgaard, E.L.; Lauritsen, K.M.; Jørgensen, M.J.; Lange, J.; Thiesson, B. Early detection of sepsis utilizing deep learning on electronic health record event sequences. Artif. Intell. Med. 2020, 104, 101820. [Google Scholar] [CrossRef]
  14. Wulff, A.; Haarbrandt, B.; Tute, E.; Marschollek, M.; Beerbaum, P.; Jack, T. An interoperable clinical decision-support system for early detection of SIRS in pediatric intensive care using openEHR. Artif. Intell. Med. 2018, 89, 10–23. [Google Scholar] [CrossRef]
  15. Brossier, D.; El Taani, R.; Sauthier, M.; Roumeliotis, N.; Emeriaud, G.; Jouvet, P. Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient. Pediatr. Crit. Care Med. 2018, 19, e189–e198. [Google Scholar] [CrossRef] [PubMed]
  16. Leisman, D.E.; Harhay, M.O.; Lederer, D.J.; Abramson, M.; Adjei, A.A.; Bakker, J.; Ballas, Z.K.; Barreiro, E.; Bell, S.C.; Bellomo, R.; et al. Development and Reporting of Prediction Models: Guidance for Authors From Editors of Respiratory, Sleep, and Critical Care Journals. Crit. Care Med. 2020, 48, 623–633. [Google Scholar] [CrossRef]
  17. Mack, E.H.; Wheeler, D.S.; Embi, P.J. Clinical decision support systems in the pediatric intensive care unit. Pediatr. Crit. Care Med. 2009, 10, 23–28. [Google Scholar] [CrossRef]
  18. DesPrez, K.; McNeil, J.B.; Wang, C.; Bastarache, J.A.; Shaver, C.M.; Ware, L.B. Oxygenation Saturation Index Predicts Clinical Outcomes in ARDS. Chest 2017, 152, 1151–1158. [Google Scholar] [CrossRef]
  19. Slater, A.; Shann, F.; Pearson, G.; Paediatric Index of Mortality (PIM) Study Group. PIM2: A revised version of the Paediatric Index of Mortality. Intensive Care Med. 2003, 29, 278–285. [Google Scholar] [CrossRef] [PubMed]
  20. Leteurtre, S.; Duhamel, A.; Salleron, J.; Grandbastien, B.; Lacroix, J.; Leclerc, F.; Groupe Francophone de Réanimation et d’Urgences Pédiatriques (GFRUP). PELOD-2: An update of the PEdiatric logistic organ dysfunction score. Crit. Care Med. 2013, 41, 1761–1773. [Google Scholar] [CrossRef]
  21. Sauthier, M.; Landry-Hould, F.; Leteurtre, S.; Kawaguchi, A.; Emeriaud, G.; Jouvet, P. Comparison of the Automated Pediatric Logistic Organ Dysfunction-2 Versus Manual Pediatric Logistic Organ Dysfunction-2 Score for Critically Ill Children. Pediatr. Crit. Care Med. 2020, 21, e160–e169. [Google Scholar] [CrossRef]
  22. Breiman, L. Breiman and Cutler’s Random Forests for Classification and Regression, R package version 4.6-14 [Internet]. 2018. Available online: https://cran.r-project.org/web/packages/randomForest/randomForest.pdf (accessed on 14 September 2023).
  23. Available online: https://cran.r-project.org/web/packages/randomForestSRC/randomForestSRC.pdf (accessed on 14 September 2023).
  24. Chen, C.; Liaw, A.; Breiman, L. Using Random Forest to Learn Imbalanced Data; University of California: Los Angeles, CA, USA, 2004. [Google Scholar]
  25. Ishwaran, H. Fast Unified Random Forests for Survival, Regression, and Classification (RF-SRC). R Package Version 2.9.3. [Internet]. 2020. Available online: https://cran.rproject.org/web/packages/randomForestSRC/randomForestSRC.pdf (accessed on 14 September 2023).
  26. Kuhn, M. Training on Classification and Regression, R Package Version 6.0.86. Available online: https://cran.r-project.org/web/packages/caret/caret.pdf (accessed on 14 September 2023).
  27. Cernada, M.; Aguar, M.; Brugada, M.; Gutiérrez, A.; López, J.L.; Castell, M.; Vento, M. Ventilator-associated pneumonia in newborn infants diagnosed with an invasive bronchoalveolar lavage technique: A prospective observational study. Pediatr. Crit. Care Med. 2013, 14, 55–61. [Google Scholar] [CrossRef] [PubMed]
  28. Elward, A.M.; Warren, D.K.; Fraser, V.J. Ventilator-associated pneumonia in pediatric intensive care unit patients: Risk factors and outcomes. Pediatrics 2002, 109, 758–764. [Google Scholar] [CrossRef] [PubMed]
  29. Sajedi, M. VAP-Predictive-Model, GitHub repository [Internet]. 2021. Available online: https://github.com/SajediM/VAP-Predictive-Model (accessed on 30 September 2020).
  30. Jouvet, P.A.; Payen, V.; Gauvin, F.; Emeriaud, G.; Lacroix, J. Weaning children from mechanical ventilation with a computer-driven protocol: A pilot trial. Intensive Care Med. 2013, 39, 919–925. [Google Scholar] [CrossRef]
  31. Zaglam, N.; Jouvet, P.; Flechelles, O.; Emeriaud, G.; Cheriet, F. Computer-aided diagnosis system for the Acute Respiratory Distress Syndrome from chest radiographs. Comput. Biol. Med. 2014, 52, 41–48. [Google Scholar] [CrossRef] [PubMed]
  32. Mazo, C.; Kearns, C.; Mooney, C.; Gallagher, W.M. Clinical Decision Support Systems in Breast Cancer: A Systematic Review. Cancers 2020, 12, 369. [Google Scholar] [CrossRef] [PubMed]
  33. Ostropolets, A.; Zhang, L.; Hripcsak, G. A scoping review of clinical decision support tools that generate new knowledge to support decision making in real time. J. Am. Med. Inform. Assoc. 2020, 27, 1968–1976. [Google Scholar] [CrossRef]
Figure 1. Flow chart. VAP: ventilator-associated pneumonia.
Figure 2. Variable importance used in the clinical decision system.
Figure 3. ROC curves. The black curve represents the performance of the algorithm on the training set (2/3 of the dataset); the red curve represents its performance on the remaining test set.
Figure 4. False positive rate and true positive rate over different time periods for different thresholds. Th.Default: default threshold of the model; Th80%: threshold corresponding to 80% sensitivity; Th85%: threshold corresponding to 85% sensitivity.
Table 1. Population characteristics.

| Population Characteristics | Global Population (N: 827) | VAP Patients (N: 77) | No VAP Patients (N: 750) | p |
|---|---|---|---|---|
| Weight (kg) | 15.8 ± 1.6 | 20.99 ± 2.7 | 15.25 ± 0.7 | 0.01 |
| Age (days) | 1308 ± 1904 | 1806 ± 250 | 1256 ± 69 | 0.02 |
| Gender male (%) | 475 (57%) | 41 (53%) | 434 (58%) | 0.4 |
| PELOD-2 score | 10.1 ± 4.8 | 10.4 ± 0.6 | 9.9 ± 0.2 | 0.47 |
| PELOD-2 mortality risk (%) | 0.3 ± 0.3 | 0.3 ± 0.1 | 0.2 ± 0.01 | 0.15 |
| Bronchoscopy (%) | 70 (8%) | 14 (18%) | 56 (8%) | 0.04 |
| Neuromuscular blocker (%) | 279 (34%) | 43 (55%) | 236 (31%) | <0.0001 |
| Mechanical ventilation duration (days) | 12.5 ± 30.9 | 29.3 ± 5.1 | 10.9 ± 1.5 | <0.0001 |
| PICU length of stay (days) | 26.1 ± 52.5 | 48.3 ± 7.1 | 23.4 ± 1.8 | <0.0001 |
| Survival rate (%) | 740 (90%) | 65 (84%) | 675 (90%) | 0.16 |

PICU: pediatric intensive care unit; VAP: ventilator-associated pneumonia.
Table 2. Train and test groups’ characteristics.

| Train and Test Groups Characteristics | Test Group (n: 261) | Train Group (n: 572) | p |
|---|---|---|---|
| Weight (kg) | 16.9 ± 1.3 | 15.6 ± 0.8 | 0.40 |
| Age (days) | 1387 ± 129 | 1268 ± 84 | 0.43 |
| Gender male (n, %) | 146 (60) | 284 (58) | 0.69 |
| PELOD-2 score | 10.4 ± 0.2 | 9.7 ± 0.5 | 0.16 |
| PELOD-2 mortality risk (%) | 0.3 ± 0.1 | 0.2 ± 0.1 | 0.13 |
| Proportion of VAP patients (n, %) | 25 (10) | 50 (10) | 0.99 |
| Length of mechanical ventilation before VAP (days) | 9.9 ± 2.7 | 9.6 ± 1.9 | 0.66 |
| Mechanical ventilation duration (days) | 12.1 ± 1.6 | 11.2 ± 1.1 | 0.64 |
| PICU length of stay (days) | 21.3 ± 2.4 | 22.3 ± 2.0 | 0.81 |

PELOD: Pediatric Logistic Organ Dysfunction; PICU: pediatric intensive care unit; VAP: ventilator-associated pneumonia.
Table 3. Imbalanced Random Forest model. Threshold and specificity from predetermined sensitivity for the train set.

| Threshold | Specificity | Sensitivity |
|---|---|---|
| 0.41 | 0.79 | 0.80 |
| 0.29 | 0.64 | 0.87 |
| 0.25 | 0.58 | 0.91 |
| 0.22 | 0.52 | 0.96 |
Table 4. Imbalanced Random Forest model. Sensitivity and specificity for the test set corresponding to different thresholds.

| Threshold | Specificity | Sensitivity |
|---|---|---|
| 0.41 | 0.797 | 0.73 |
| 0.28 | 0.66 | 0.77 |
| 0.25 | 0.59 | 0.77 |
| 0.22 | 0.53 | 0.82 |
Table 5. Error rates (%) for predicted classes.

|  | G1: ER.Pred | G1: ER.Pred.th80 | G1: ER.Pred.th85 | G2: ER.Pred | G2: ER.Pred.th80 | G2: ER.Pred.th85 |
|---|---|---|---|---|---|---|
| All | 11.56 | 19.60 | 31.66 | 79.59 | 83.67 | 95.92 |
| VAP | 23.08 | 23.08 | 23.08 | 66.67 | 66.67 | 88.89 |
| NoVAP | 10.75 | 19.35 | 32.26 | 82.50 | 87.50 | 97.50 |

G1: patients with at most 3 time blocks of observations; G2: patients with at least 4 time blocks of observations. ER.Pred: error rate for prediction; ER.Pred.th80: error rate at the threshold corresponding to 80% sensitivity; ER.Pred.th85: error rate at the threshold corresponding to 85% sensitivity.
