Opinion

Artificial Intelligence in NICU and PICU: A Need for Ecological Validity, Accountability, and Human Factors

by Avishek Choudhury 1,* and Estefania Urena 2
1 Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV 26506, USA
2 Registered Nurse, Intensive Critical Unit, Lincoln Medical and Mental Health Centre, New York, NY 10451, USA
* Author to whom correspondence should be addressed.
Submission received: 27 April 2022 / Revised: 17 May 2022 / Accepted: 19 May 2022 / Published: 21 May 2022

Abstract

Pediatric patients, particularly in neonatal and pediatric intensive care units (NICUs and PICUs), are typically at an increased risk of fatal decompensation. Consequently, any delay in treatment or minor error in medication dosage can severely complicate patient health. In such an environment, clinicians are expected to quickly and effectively comprehend large volumes of medical information to diagnose and develop a treatment plan for each patient. Integrating Artificial Intelligence (AI) into the clinical workflow is a potential solution to safeguard pediatric patients and augment the quality of care. However, before making AI an integral part of pediatric care, it is essential to evaluate the technology from a human factors perspective, ensuring its readiness (technology readiness level) and ecological validity. Addressing AI accountability is also critical to safeguarding clinicians and improving AI acceptance in the clinical workflow. This article summarizes the application of AI in the NICU/PICU, identifies existing flaws in AI from the clinicians' standpoint, and proposes recommendations that, if addressed, could improve AI's readiness for a real clinical environment.

1. Artificial Intelligence in Pediatrics

With expanding healthcare infrastructure and connected medical databases, clinicians have more data to inform clinical decision-making than ever before. However, when confronted with information beyond the scope of their expertise and in excessive quantities, they are likely to resort to boundedly rational and, in some cases, incorrect diagnoses. One way to support complex clinical processes is to leverage Artificial Intelligence (AI) technologies, often deployed as AI-based clinical decision support systems. The media often portrays AI as having remarkable capabilities in healthcare. AI can be broadly defined as an intelligent system capable of performing human-like activities based on retrospective data. A typical AI system either encodes predefined rules and if–then statements or is powered by dynamic statistical models proficient at capturing non-linear relationships among several variables. More recently, a wide array of AI technologies has been developed to augment the healthcare system, and the US Food and Drug Administration (FDA) has approved several AI-based products, signaling the gradual integration of AI into healthcare [1,2].
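As a concrete illustration of the predefined-rules flavor of such systems, a minimal if–then decision support check might look like the sketch below. All function names, thresholds, and vital-sign ranges are hypothetical examples chosen for illustration only, not clinical guidance.

```python
# Minimal sketch of a rule-based (if-then) clinical decision support check.
# Thresholds are illustrative placeholders, not clinical reference ranges.

def neonatal_vitals_alerts(heart_rate_bpm, spo2_pct, temp_c):
    """Return a list of alert strings triggered by simple if-then rules."""
    alerts = []
    if heart_rate_bpm > 180 or heart_rate_bpm < 100:
        alerts.append("heart rate out of expected neonatal range")
    if spo2_pct < 90:
        alerts.append("low oxygen saturation")
    if temp_c < 36.0 or temp_c > 37.5:
        alerts.append("temperature out of expected range")
    return alerts

print(neonatal_vitals_alerts(190, 88, 36.8))
```

In practice, the statistical-model variant replaces these hand-written thresholds with relationships learned from retrospective data, but the input/output contract (vitals in, alerts out) is the same.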
Pediatric patients are typically at an increased risk of fatal decompensation and are sensitive to medications. Consequently, any delay in treatment or minor error in medication dosage can severely complicate patient health. In such an environment, clinicians are expected to quickly and effectively comprehend large volumes of medical information to diagnose and develop a treatment plan for a given patient. As one of the most complex and sensitive healthcare domains, neonatal and pediatric intensive care units (NICUs and PICUs) are ideal environments for AI use, where doctors and nurses can leverage AI's computational capabilities to make well-informed and faster clinical decisions. The use of AI in pediatrics was first recorded in 1968, when Paycha developed SHELP, a computer-assisted medical decision-making system that diagnosed inborn errors of metabolism [3]. Soon after, Shortliffe developed an expert system named MYCIN to identify bacteria causing severe blood infections among pediatric patients [4]. Since then, as AI has matured, several randomized controlled trials have applied the technology to various problems in pediatrics. For instance, one study implemented an automated AI-based decision support system to control glucose levels effectively and safely among pediatric patients [5]. Another developed an AI-based wearable device, the Superpower Glass, to augment the social outcomes of children with autism [6]. A study conducted in China developed an AI-based disease risk prediction model for newborns with inherited metabolic diseases [7], and another reported a significant improvement in neurocognitive performance among children receiving an AI-based cognitive stimulation therapy [8]. Beyond clinical trials, several other AI technologies have been developed that play an active role in neonatal and pediatric ICUs.
For example, AI-based models have been used in the NICU to predict birth asphyxia [9,10] and neonatal seizures [11], as well as to diagnose neonatal sepsis [12,13] and respiratory distress syndrome [14]. Table 1 gives a snapshot of the various applications of AI in NICU and PICU.
Overall, studies have used AI either to improve patient health directly, by allowing physicians to "spend more time in direct patient care [while reducing provider burnout]" [23], or to augment clinical processes and thus improve patient health indirectly. For instance, a study conducted in California reported AI's efficiency in identifying critically ill PICU patients with an underlying genetic disorder [24]. A study in Spain used AI-driven music to reduce stress levels among neonates [25]. Several studies used AI algorithms to develop early warning systems that provided timely detection of changes in health status and the development of critical illness [12,15,17,19,26,27,28] or of pathologic eye disease progression in preterm infants [29]. A recent review also reported several 'indirect impacts' of AI on pediatric patients [30,31], where AI was noted to augment clinical decision-making and diagnostic accuracy in the pediatric setting [21,24,28].

2. Current Challenges Preventing AI Application

Despite all the evidence supporting AI in pediatrics, its use and adoption have been limited. Given that no study thus far has associated AI with worsened health outcomes or patient harm in a pediatric setting, why do doctors and healthcare management hesitate to integrate AI into their clinical workflow? Of all the possible reasons hindering the acceptance of AI in pediatrics, (a) the lack of ecological validity and (b) a low technology readiness level, two inter-related factors, along with (c) the lack of AI accountability, seem to be the prominent determinants that have not been sufficiently acknowledged in the literature.

2.1. Ecological Validity—Can the User Use AI Effectively and Safely?

As depicted in several studies, AI systems may facilitate a personalized approach to pediatric care by augmenting diagnostic processes, and AI-based solutions have the power to reinvigorate clinical practice. Although the advent of personalized patient treatment is compelling and often crucial in a pediatric environment, there is a need to assess the true potential of AI when implemented in a real, uncontrolled, and chaotic healthcare scenario. In the studies published on this topic, the experiments were conducted either retrospectively or by experts in a controlled setting, and therefore lack ecological validity. Recent systematic reviews [32,33] analyzing AI's role and performance in healthcare acknowledged that AI systems and models were often evaluated under unrealistic conditions (a controlled research environment) with minimal relevance to routine clinical practice (workload, chaos, and time constraints). There is therefore a lack of evidence exhibiting AI's efficacy in a real clinical environment.
It is essential to understand that the working environment and cognitive workload are significant determinants of technology use. In a pediatric setting, clinicians are often assigned several patients with unique needs and health statuses. Given the global shortage of staff and the increasing burden on the healthcare industry, clinicians often experience burnout and fatigue. Individuals under such stress and discomfort might not use AI devices, or comprehend their outputs, as efficiently as reported in research articles. Therefore, studies must evaluate AI systems under realistic conditions to ensure effective use once the technology is integrated into a clinical workflow.

2.2. Technology Readiness Level

Recently, several innovations in medical AI have been associated with excellent performance in the literature. However, research breakthroughs do not necessarily translate into technology that is ready for use in a high-risk environment such as healthcare [32,33]: most AI systems featuring prominent abilities in the literature would not be executable in a clinical environment. According to the Technology Readiness Level (TRL) scale, most AI systems in pediatric and neonatal intensive care (PICU and NICU), if not all, do not qualify for implementation. TRL is a gauging system developed to assess the maturity level of a particular technology [34]. It consists of nine readiness levels, where TRL 1 is the lowest and TRL 9 the highest (see Box 1). Applying the TRL system to the articles on AI in pediatrics, we can observe that, at best, published articles describe prototype testing in an operational environment with near-implementation readiness (TRL 7). Few to none of the AI systems discussed in the literature have been deployed into a real ICU setting and evaluated longitudinally over a significant duration.
Box 1. Technology Readiness Levels (1–9).
Technologies at TRL 1 through 4 are executable in a laboratory setting, where the main objective is to conduct research. This stage is the proof of concept.
- TRL 1: Basic principles of the technology observed
- TRL 2: Technology concept formulated
- TRL 3: Experimental proof of concept developed
- TRL 4: Technology validated in a laboratory setting
Technologies at TRL 5 through 7 are in the development phase, where a functional prototype is ready.
- TRL 5: Technology validated in a relevant environment (controlled setting in a real-life environment)
- TRL 6: Technology demonstrated in a relevant environment
- TRL 7: System prototype demonstrated in an operational environment
Lastly, technologies at TRL 8 and 9 are in the operational phase, where the primary objective is implementation.
- TRL 8: System completed and certified for commercial use
- TRL 9: Actual system proven in an operational environment
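The three-phase banding of the TRL scale described in Box 1 (research, development, operational) can be expressed as a small helper. This is only an illustration of the scale itself; the function name and phase labels are our own, not part of any cited framework.

```python
# Sketch of the TRL banding from Box 1: levels 1-4 are research (proof of
# concept), 5-7 are development (prototype), and 8-9 are operational.

def trl_phase(level: int) -> str:
    """Map a Technology Readiness Level (1-9) to its maturity phase."""
    if not 1 <= level <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if level <= 4:
        return "research (proof of concept)"
    if level <= 7:
        return "development (prototype)"
    return "operational (implementation)"

# Most pediatric AI studies discussed above sit at or below TRL 7.
print(trl_phase(7))
```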

2.3. AI Accountability—Who Is Responsible for Technology Error?

How does the absence of AI accountability impact clinicians' intention to use the technology? This article defines 'accountability' as a process in which healthcare practitioners have the responsibility to justify their 'clinical actions' to patients (or families) and are held liable for any resulting positive or negative impact on patient health. When using an AI-based decision support system, only the clinicians are held accountable if they decide to follow an AI-recommended treatment that results in patient harm. Clinicians are also held responsible if they deviate from standard protocols [1]. This is worrisome because, under such circumstances, clinicians will only follow AI when it matches their judgment and aligns with the standard protocol, leaving the AI underused.
Furthermore, it might be difficult for clinicians, who are not necessarily trained in the subject, to comprehend an AI's functioning while in a state of burnout and to identify any technological flaw. One way to address the problem of accountability is to train doctors and nurses to understand when to rely on AI recommendations and when not to. However, training or educating practitioners on AI will require substantial effort. Solving the AI accountability issue will require a systematic approach involving stakeholders from law and policymaking, computer scientists, human factors researchers, healthcare organizations, healthcare practitioners, insurance agencies, and patients.

3. Recommendations and Future Steps

Concerns regarding ecological validity and TRL can be associated with AI usability. There is a lack of studies evaluating the usability or user-centeredness of AI technologies in a pediatric setting. As acknowledged earlier in this article, clinicians are often overwhelmed with clinical responsibilities. Therefore, to ensure the adoption of AI in pediatrics, it is essential to develop systems that are easy to use and that fulfill pediatric nurses' and doctors' requirements. AI developers also need to consider the end-users of their products. Since most bedside tasks are performed by nurses, an AI system implemented at the bedside should be designed for nurses, whose digital literacy can differ substantially from that of physicians or researchers (typical study participants) and may vary across demographics.
Future studies should include pediatric populations with multiple chronic complexities in randomized controlled trials. Current approaches to pediatric AI usually emphasize single diseases, which may have minimal relevance to real, complex scenarios. Another consideration is to build adaptive algorithms that can gauge patients' health status and evolve over time. Future research efforts to integrate AI systems into pediatric settings therefore need to match the measures and underlying disease trajectories to patients' situations.
Until now, studies have focused on the patient. What is missing from the literature is the use of AI to address clinicians' concerns. Addressing clinicians' problems can not only improve their clinical performance but also augment care quality. The pediatric unit (PICU and NICU) is one of the most critical departments within any healthcare establishment. While caring for a pediatric patient, particularly in a NICU or PICU setting, clinicians need to account for the body size differences between patients and be aware of each patient's continuous physical and cognitive development. Consequently, the medication dosage (which largely depends on body weight) may change over time for a pediatric patient, depending on their rate of physical growth. Clinicians also need special consideration when intubating pediatric patients, who have proportionally larger tongues and a uniquely positioned epiglottis and larynx. Pediatric patients have subtle cardiovascular differences as well, making heart rate a critical clinical factor, and they are prone to pathogens and to neurological disorders from poisoning. In other words, pediatric patients have a very low tolerance for error, and clinicians are therefore required to provide extra care and personalized treatment.
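The weight-dependence of pediatric dosing described above can be made concrete with a minimal arithmetic sketch. The mg/kg rate, dose cap, and weights below are hypothetical numbers chosen for illustration; this is not clinical guidance for any drug.

```python
# Illustrative weight-based dosing arithmetic: pediatric doses scale with
# body weight, so the computed dose must be revisited as the child grows.
# All numbers are hypothetical placeholders, not clinical guidance.

def dose_mg(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Weight-proportional dose, capped at a maximum single dose."""
    return min(weight_kg * mg_per_kg, max_mg)

# The same prescription yields different doses as an infant gains weight.
for week, weight in [(0, 3.2), (4, 4.1), (8, 5.0)]:
    print(f"week {week}: {dose_mg(weight, 15.0, 500.0):.1f} mg")
```

An AI system tracking growth trajectories would, in effect, be automating this kind of recomputation continuously rather than at discrete prescription reviews.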
Apart from caring for patients, pediatric clinicians also dedicate a significant amount of time and effort to educating patients' parents. Such work demands often take a heavy toll on their cognitive workload, and AI technologies can be developed to identify clinicians undergoing excessive cognitive load or burnout. Since clinicians in a burnout state are prone to human error, identifying them and providing timely assistance can help ensure patient safety. Identifying cognitive workload would also help floor managers better schedule their staff and designate appropriate resources.
Night nurses, particularly those new to the profession, may feel exhausted during their shifts. In a setting where nurses must keep a continuous watch on patient monitors (a critical aspect of NICU and PICU care), performing efficiently often becomes challenging. In such a scenario, AI, in conjunction with eye trackers, can be leveraged to measure nurses' attention span and to identify where on the screen they gaze. AI can then optimize the information displayed on clinical monitors to highlight the essential data in real time.
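The gaze-zone idea above could be sketched as follows: classify eye-tracker fixation coordinates into screen regions and compute the fraction of attention each region received, so the display could emphasize under-watched data. The screen size, 2x2 zone grid, and fixation coordinates are our own illustrative assumptions, not a description of any cited system.

```python
# Hypothetical sketch: map eye-tracker fixations to monitor regions and
# compute per-region dwell fractions. Grid and resolution are assumptions.

from collections import Counter

def gaze_zone(x: float, y: float, width: int = 1920, height: int = 1080) -> str:
    """Classify a fixation point into a 2x2 grid of screen zones."""
    col = "left" if x < width / 2 else "right"
    row = "top" if y < height / 2 else "bottom"
    return f"{row}-{col}"

def dwell_fractions(fixations):
    """Fraction of fixations falling in each screen zone."""
    counts = Counter(gaze_zone(x, y) for x, y in fixations)
    total = sum(counts.values())
    return {zone: n / total for zone, n in counts.items()}

print(dwell_fractions([(100, 100), (200, 150), (1700, 900), (300, 120)]))
```

A real system would use many more fixations, a finer grid aligned to the monitor's clinical widgets, and temporal weighting, but the dwell-fraction signal is the core input for deciding what to highlight.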
AI technology can also be used to identify and record clinician behavior leading to near misses, so that it can generate alerts in similar future situations. It is essential to acknowledge that healthcare outcomes are acceptable largely because clinicians make educated, just-in-time adjustments according to fluctuating health conditions. Future work should train AI on the critical adjustments made by clinicians, so that AI can adapt in real time in the same manner as experienced clinicians do. Note that the views presented in this article may differ from those of other AI experts and across healthcare settings; they should therefore be considered with caution.

4. Major Takeaways

  • Artificial Intelligence has great potential, but consideration of human factors is essential for its sustainability in pediatrics.
  • The lack of ecological validity hinders AI's adoption and usage in the clinical workflow.
  • The lack of AI accountability can be a significant hurdle to AI acceptance among clinicians.
  • Artificial Intelligence, if used appropriately, can improve clinical workflow and, in turn, augment the quality of care.
  • All AI-based decision support systems should be designed explicitly for their end-users (doctors and nurses) to safeguard both the technology and patient safety.

Author Contributions

Conceptualization, A.C.; Methodology, A.C. and E.U.; Investigation, A.C.; Writing—Original Draft Preparation, A.C. and E.U.; Writing—Review & Editing, A.C. and E.U.; Supervision, A.C.; Project Administration, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Price, W.N., II; Gerke, S.; Cohen, I.G. Potential liability for physicians using artificial intelligence. JAMA 2019, 322, 1765.
  2. FDA. Patient Engagement Advisory Committee Meeting Announcement. 2020. Available online: https://www.fda.gov/advisory-committees/advisory-committee-calendar/october-22-2020-patient-engagement-advisory-committee-meeting-announcement-10222020-10222020 (accessed on 1 May 2022).
  3. Paycha, F. Diagnosis with the aid of artificial intelligence: Demonstration of the 1st diagnostic machine. Presse Therm. Clim. 1968, 105, 22–25.
  4. Shortliffe, E.H.; Davis, R.; Axline, S.G.; Buchanan, B.G.; Green, C.C.; Cohen, S.N. Computer-based consultations in clinical therapeutics: Explanation and rule acquisition capabilities of the MYCIN system. Comput. Biomed. Res. 1975, 8, 303–320.
  5. Nimri, R.; Battelino, T.; Laffel, L.M.; Slover, R.H.; Schatz, D.; Weinzimer, S.A.; Dovc, K.; Danne, T.; Phillip, M.; Shalitin, S.; et al. Insulin dose optimization using an automated artificial intelligence-based decision support system in youths with type 1 diabetes. Nat. Med. 2020, 26, 1380–1384.
  6. Voss, C.; Schwartz, J.; Daniels, J.; Kline, A.; Haber, N.; Washington, P.; Tariq, Q.; Robinson, T.N.; Desai, M.; Phillips, J.M.; et al. Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: A randomized clinical trial. JAMA Pediatr. 2019, 173, 446–454.
  7. Yang, R.L.; Yang, Y.L.; Wang, T.; Xu, W.Z.; Yu, G.; Yang, J.B.; Sun, Q.L.; Gu, M.S.; Li, H.B.; Zhao, D.H.; et al. Establishment of an auxiliary diagnosis system of newborn screening for inherited metabolic diseases based on artificial intelligence technology and a clinical trial. Zhonghua Er Ke Za Zhi 2021, 59, 286–293.
  8. Medina, R.; Bouhaben, J.; de Ramón, I.; Cuesta, P.; Antón-Toro, L.; Pacios, J.; Quintero, J.; Ramos-Quiroga, J.A.; Maestú, F. Electrophysiological brain changes associated with cognitive improvement in a pediatric attention deficit hyperactivity disorder digital artificial intelligence-driven intervention: Randomized controlled trial. J. Med. Internet Res. 2021, 23, e25466.
  9. Onu, C.C.; Udeogu, I.; Ndiomu, E.; Kengni, U.; Precup, D.; Sant'anna, G.M.; Alikor, E.A.D.; Opara, P. Ubenwa: Cry-based diagnosis of birth asphyxia. Machine Learning for Development Workshop, 31st Conference on Neural Information Processing Systems. arXiv 2017, arXiv:1711.06405.
  10. Onu, C.C.; Lebensold, J.; Hamilton, W.L.; Precup, D. Neural transfer learning for cry-based diagnosis of perinatal asphyxia. 20th Annual Conference of the International Speech Communication Association INTERSPEECH. arXiv 2019, arXiv:1906.10199.
  11. Si, Y. Machine learning applications for electroencephalograph signals in epilepsy: A quick review. Acta Epileptol. 2020, 2, 5.
  12. Mani, S.; Ozdas, A.; Aliferis, C.; Varol, H.A.; Chen, Q.; Carnevale, R.; Chen, Y.; Romano-Keeler, J.; Nian, H.; Weitkamp, J.-H. Medical decision support using machine learning for early detection of late-onset neonatal sepsis. J. Am. Med. Inform. Assoc. 2014, 21, 326–336.
  13. Masino, A.J.; Harris, M.C.; Forsyth, D.; Ostapenko, S.; Srinivasan, L.; Bonafide, C.; Balamuth, F.; Schmatz, M.; Grundmeier, R.W. Machine learning models for early sepsis recognition in the neonatal intensive care unit using readily available electronic health record data. PLoS ONE 2019, 14, e0212665.
  14. Verder, H.; Heiring, C.; Clark, H.; Sweet, D.; Jessen, T.E.; Ebbesen, F.; Björklund, L.J.; Andreasson, B.; Bender, L.; Bertelsen, A.; et al. Rapid test for lung maturity, based on spectroscopy of gastric aspirate, predicted respiratory distress syndrome with high sensitivity. Acta Paediatr. 2016, 106, 430–437.
  15. He, L.; Li, H.; Holland, S.K.; Yuan, W.; Altaye, M.; Parikh, N.A. Early prediction of cognitive deficits in very preterm infants using functional connectome data in an artificial neural network framework. NeuroImage Clin. 2018, 18, 290–297.
  16. Podda, M.; Bacciu, D.; Micheli, A.; Bellu, R.; Placidi, G.; Gagliardi, L. A machine learning approach to estimating preterm infants survival: Development of the preterm infants survival assessment (PISA) predictor. Sci. Rep. 2018, 8, 13743.
  17. Lamping, F.; Jack, T.; Rübsamen, N.; Sasse, M.; Beerbaum, P.; Mikolajczyk, R.T.; Boehne, M.; Karch, A. Development and validation of a diagnostic model for early differentiation of sepsis and non-infectious SIRS in critically ill children: A data-driven approach using machine-learning algorithms. BMC Pediatr. 2018, 18, 112.
  18. Kayhanian, S.; Young, A.M.H.; Mangla, C.; Jalloh, I.; Fernandes, H.M.; Garnett, M.R.; Hutchinson, P.J.; Agrawal, S. Modelling outcomes after paediatric brain injury with admission laboratory values: A machine-learning approach. Pediatr. Res. 2019, 86, 641–645.
  19. Kim, S.Y.; Kim, S.; Cho, J.; Kim, Y.S.; Sol, I.S.; Sung, Y.; Cho, I.; Park, M.; Jang, H.; Kim, Y.H.; et al. A deep learning model for real-time mortality prediction in critically ill children. Crit. Care 2019, 23, 279.
  20. Ruiz, V.M.; Saenz, L.; Lopez-Magallon, A.; Shields, A.; Ogoe, H.A.; Suresh, S.; Munoz, R.; Tsui, F.R. Early prediction of critical events for infants with single-ventricle physiology in critical care using routinely collected data. J. Thorac. Cardiovasc. Surg. 2019, 158, 234–243.e3.
  21. Fraiwan, L.; Alkhodari, M. Neonatal sleep stage identification using long short-term memory learning system. Med. Biol. Eng. Comput. 2020, 58, 1383–1391.
  22. Feng, J.; Lee, J.; Vesoulis, Z.A.; Li, F. Predicting mortality risk for preterm infants using deep learning models with time-series vital sign data. NPJ Digit. Med. 2021, 4, 108.
  23. Spatharou, A.; Hieronimus, S.; Jenkins, J. Transforming healthcare with AI: The impact on the workforce and organizations. 10 March 2020. Available online: https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/transforming-healthcare-with-ai (accessed on 1 May 2022).
  24. Clark, M.M.; Hildreth, A.; Batalov, S.; Ding, Y.; Chowdhury, S.; Watkins, K.; Ellsworth, K.; Camp, B.; Kint, C.I.; Yacoubian, C.; et al. Diagnosis of genetic diseases in seriously ill children by rapid whole-genome sequencing and automated phenotyping and interpretation. Sci. Transl. Med. 2019, 11, eaat6177.
  25. Caparros-Gonzalez, R.A.; de la Torre-Luque, A.; Diaz-Piedra, C.; Vico, F.J.; Buela-Casal, G. Listening to relaxing music improves physiological responses in premature infants: A randomized controlled trial. Adv. Neonatal Care 2018, 18, 58–69.
  26. Ornek, A.H.; Ceylan, M.; Ervural, S. Health status detection of neonates using infrared thermography and deep convolutional neural networks. Infrared Phys. Technol. 2019, 103, 103044.
  27. Matam, B.R.; Duncan, H.; Lowe, D. Machine learning based framework to predict cardiac arrests in a paediatric intensive care unit: Prediction of cardiac arrests. J. Clin. Monit. Comput. 2019, 33, 713–724.
  28. Irles, C.; González-Pérez, G.; Muiños, S.C.; Macias, C.M.; Gómez, C.S.; Martínez-Zepeda, A.; González, G.C.; Servitje, E.L. Estimation of neonatal intestinal perforation associated with necrotizing enterocolitis by machine learning reveals new key factors. Int. J. Environ. Res. Public Health 2018, 15, 2509.
  29. Campbell, J.P.; Ataer-Cansizoglu, E.; Bolon-Canedo, V.; Bozkurt, A.; Erdogmus, D.; Kalpathy-Cramer, J.; Patel, S.N.; Reynolds, J.D.; Horowitz, J.; Hutcheson, K.; et al. Expert diagnosis of plus disease in retinopathy of prematurity from computer-based image analysis. JAMA Ophthalmol. 2016, 134, 651–657.
  30. Adegboro, C.O.; Choudhury, A.; Asan, O.; Kelly, M.M. Artificial intelligence to improve health outcomes in the NICU and PICU: A systematic review. Hosp. Pediatr. 2021, 12, 93–110.
  31. Liu, X.; Faes, L.; Kale, A.U.; Wagner, S.K.; Fu, D.J.; Bruynseels, A.; Mahendiran, T.; Moraes, G.; Shamdas, M.; Kern, C.; et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit. Health 2019, 1, e271–e297.
  32. Choudhury, A.; Asan, O. Role of artificial intelligence in patient safety outcomes: Systematic literature review. JMIR Med. Inform. 2020, 8, e18599.
  33. Choudhury, A.; Renjilian, E.; Asan, O. Use of machine learning in geriatric clinical care for chronic diseases: A systematic literature review. JAMIA Open 2020, 3, 459–471.
  34. Straub, J. In search of technology readiness level (TRL) 10. Aerosp. Sci. Technol. 2015, 46, 312–320.
Table 1. State of the art: Artificial Intelligence in PICU and NICU (not an exhaustive list).

| Study | Institution(s) | Patients | Data Source and Type | Model | Compared with Clinicians | Conclusion |
|---|---|---|---|---|---|---|
| [15] | Autism Brain Imaging Data Exchange Database | 28 | Research database: Images | Artificial Neural Network | No | Accurately predicted cognitive deficits/function in individual very preterm infants soon after birth; however, a larger data size is required to achieve the clinical gold standard. |
| [16] | Italian Neonatal Network | 23,747 | Research database: Numerical | Artificial Neural Network | No | Using only the limited information available up to 5 min after birth, AI can have a significant advantage over current approaches in predicting the survival of preterm infants. |
| [17] | German Tertiary Care PICU | 296 | EHR: Numerical | Random Forest | No | AI can facilitate the early detection of sepsis with an accuracy superior to traditional biomarkers, and can potentially reduce antibiotic use by 30% in non-infectious cases. |
| [18] | Cambridge University | 94 | EHR: Numerical | Support Vector Machine | No | AI algorithms can predict severe traumatic injury outcomes at six months using just the three most informative parameters. |
| [19] | Severance Hospital and Samsung Medical Center | 1723 | EHR: Numerical | Convolutional Neural Network | No | The machine learning-based Pediatric Risk of Mortality Prediction Tool can outperform the conventional Pediatric Index of Mortality scoring system in predictive ability. |
| [20] | University Hospital EHR | 93 | EHR: Numerical | Naïve Bayesian models | Yes | AI models can augment clinicians' ability to identify infants with single-ventricle physiology at high risk of critical events; early prediction of such events may improve overall care quality and minimize healthcare expenses. |
| [21] | University of Pittsburgh | 37 | Research database: EEG signals | Long Short-Term Memory | No | The proposed algorithm gave promising results for automatic sleep stage scoring of neonatal sleep signals. |
| [22] | St. Louis Children's Hospital | 285 | EHR: Numerical | Novel Deep Learning Model | No | The novel AI model demonstrated efficacy in predicting the real-time mortality risk of preterm infants during initial NICU hospitalization and outperformed the existing Clinical Risk Index for Babies (CRIB) II scoring system. |

EHR = electronic health records; EEG = electroencephalogram.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Choudhury, A.; Urena, E. Artificial Intelligence in NICU and PICU: A Need for Ecological Validity, Accountability, and Human Factors. Healthcare 2022, 10, 952. https://doi.org/10.3390/healthcare10050952
