Review

Explainable AI (XAI) Applied in Machine Learning for Pain Modeling: A Review

1 Department of Mechanical Engineering, Yuan Ze University, Taoyuan 32003, Taiwan
2 Department of Electronic and Computer Engineering, Brunel University London, Uxbridge UB8 3PH, UK
3 Brain Research Center, National Yang-Ming University, Taipei 112, Taiwan
4 School of Medicine, National Yang-Ming University, Taipei 112, Taiwan
5 Neurological Institute, Taipei Veterans General Hospital, Taipei 112, Taiwan
* Authors to whom correspondence should be addressed.
Submission received: 19 May 2022 / Revised: 7 June 2022 / Accepted: 10 June 2022 / Published: 14 June 2022
(This article belongs to the Special Issue 10th Anniversary of Technologies—Recent Advances and Perspectives)

Abstract: Pain is a complex term describing the varied sensations that cause discomfort in the human body. Its consequences range from mild to severe across different organs, depending on the cause, which may be an injury, an illness or a medical procedure such as testing, surgery or therapy. With recent advances in artificial-intelligence (AI) systems in biomedical and healthcare settings, the distance between physician, clinician and patient has shortened, and AI is now well placed to interpret the pain experienced by patients with various conditions from physiological or behavioral changes. Facial expressions in particular convey rich information related to emotion and pain, so clinicians weigh these changes heavily when assessing pain; in recent years this assessment has been automated with various machine-learning and deep-learning models. To highlight the future scope and importance of AI in medicine, this study reviews explainable AI (XAI) as attention to automatic pain assessment grows, and discusses how these approaches are applied to different pain types.

1. Introduction

Artificial intelligence (AI) has been a great opportunity for economic progress, thanks to its ability to solve, quickly and precisely, problems that human intelligence alone cannot. In recent years, computer-assisted approaches have spread across every domain, especially healthcare, as advances in AI reduce and optimize the cost, time and workforce required to assess, test and complete tasks previously performed by humans, while raising the quality of care. Challenges remain, however, in the availability and development of clinical facilities equipped with AI systems, equipment, trained professionals, etc. [1,2]. AI in healthcare and other domains has already reached great heights and strongly influences the present generation. Investment in AI has grown tremendously, from over 37.5 billion USD in 2019 to a projected 97.9 billion USD by 2023 [1,2,3].
Individual patients' health records have been improved through Electronic Health Records (EHR) [4] and National Health Insurance (NHI) schemes [5,6,7,8] that develop Health Information Technology (HIT) [8]; together with the deployment of assistive tools, these have made AI-supported healthcare more reachable and convenient for people [3,4,5,6,7,8]. However, diagnosing and treating patients remains a challenging task for machine-learning or AI models, as the models are not sufficient in and of themselves and must be further endorsed by medical staff [9]. Storing an individual's health status in an Electronic Medical Record (EMR) lets physicians see the patient's disease history, diagnoses, planned or unplanned treatments, severity and recurrence, medications used, laboratory test results, etc. Having this information in one click makes the physician's analysis of the patient faster and more accurate [10,11,12]. Compared with chart reviews, these data are well-organized and easy to access: a chip card carrying an electromagnetic chip renders the data to the physician in no time once scanned, giving scope to analyze the patient's condition for diagnosis and treatment [12]. The drawback is that medical and health data are noisy and irregularly sampled, which makes combining data from different sources difficult [10].
The inclusion of these technological advances in medicine and healthcare improves digitization and informatization [13]. Yet even amid the boom in machine learning (ML) and AI, failure of an automated system can cost human lives. Early diagnosis and treatment are key to making good use of technology and minimizing the risk of disease progression; some diseases, such as cancer, chronic pain and diabetes, require long treatments. Medical data therefore demand accountability and transparency, raising three questions: (1) who is accountable if something goes wrong? (2) Can we explain why things go wrong? (3) How do we leverage systems when they go well? Many studies have proposed methods and ideas focused on interpretability and, beyond that, on explainable artificial intelligence (XAI) [14].
Explanations of AI models are most practically applied to global AI processes, and care should be taken with individual decisions; explainable artificial intelligence should be thoroughly validated before it is applied [15]. According to [15], explanations of ML decisions fall into two categories: inherent explainability and post hoc explainability. Inherent explainability applies to clear, understandable data of limited complexity, where the relation between simple inputs and outputs can be quantified directly. A simple example is regressing a car's fuel efficiency on its weight: the model explains how, on average, each extra kilogram of weight changes the fuel efficiency. Post hoc explainability, in contrast, applies where the data and models are complex and high-dimensional, as in medical-image analysis. Papers [15,16] describe heat-map images used for diagnosing pneumonia: the localized region contains both useful and non-useful information, and the map does not reveal exactly what in that area the model considered useful. It is hard for clinicians to know whether the model appropriately established that the presence of an airspace opacity was important in the decision, whether the shapes of the heart border or left pulmonary artery were the deciding factor, or whether the model relied on an inhuman feature, such as a particular pixel value or texture, that has more to do with the image-acquisition process than with the underlying disease [15]. XAI has been applied in this way to the diagnosis of several diseases, such as chronic and ophthalmic conditions.
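To make the inherent-explainability example concrete, the minimal sketch below fits the weight-versus-fuel-efficiency regression described above; the vehicle numbers are invented for illustration.

```python
# Inherent explainability: a linear model whose single coefficient directly
# answers "how does fuel efficiency change per added kilogram?"
# The numbers below are illustrative, not real vehicle data.
import numpy as np
from sklearn.linear_model import LinearRegression

weight_kg = np.array([[950], [1200], [1450], [1700], [2100]])   # car mass
efficiency_km_per_l = np.array([18.5, 15.2, 13.1, 11.4, 9.0])   # fuel economy

model = LinearRegression().fit(weight_kg, efficiency_km_per_l)

# The coefficient itself is the explanation: average change in km/l per kg.
print(f"Each extra kg changes efficiency by {model.coef_[0]:.4f} km/l")
print(f"Predicted efficiency at 1600 kg: {model.predict([[1600]])[0]:.1f} km/l")
```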
Interpretability is significant for AI models: it lets the user know why the model preferred one decision over the others [17]. High-dimensional data [15] lead to poor explainability and may undermine trustworthiness and transparency in use [17]. Paper [18] shows how explainability techniques are applied to a heart-disease dataset: the model built in [18] explains predictions over the 13 attributes of the Heart Disease Cleveland UC Irvine dataset. Feature-based techniques include Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP); LIME produces local explanations, while SHAP can explain both globally and locally. Local explanations are limited to individual predictions, whereas global explanations cover the whole model; global explanations can also be applied to individual predictions, but are less accurate there [3,18].
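A hedged sketch of how SHAP is typically applied to tabular clinical data, in the spirit of [18]: the feature names echo a few Cleveland heart-disease attributes, but the data are synthetic stand-ins, not the actual UCI dataset.

```python
# Post hoc explanation with SHAP on tabular data (requires: pip install shap).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "chol", "thalach", "oldpeak"]   # illustrative subset
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles; each value
# is one feature's contribution to one prediction (a local explanation).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # (n_samples, n_features), log-odds units

# Averaging |SHAP| over samples turns local attributions into a global ranking.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in zip(feature_names, global_importance):
    print(f"{name}: {imp:.3f}")
```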
Retinal diseases such as glaucoma turn severe when left untreated, in some cases causing irreversible vision loss. Although many deep-learning methods have emerged for diagnosing retinal diseases, their practical implementation is limited by a lack of trust that the models provide optimal and accurate decisions. The work reported in [19] quantitatively compares attribution methods using multiple measures, including robustness, runtime and sensitivity. As explained above, such methods make decisions more transparent and explainable for users; meeting these ethical and legal challenges is a prerequisite for using the model in decision making [19].
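As a minimal sketch of one attribution method of the kind compared in [19], vanilla gradient saliency scores each pixel by the gradient of the class logit with respect to the input; the untrained ResNet and random tensor below are placeholders for a fine-tuned fundus-image model and a real scan.

```python
# Vanilla gradient saliency: which pixels most affect the predicted score?
import torch
from torchvision import models

# weights=None keeps the sketch offline; a real study would load a model
# fine-tuned on retinal images here.
model = models.resnet18(weights=None).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in fundus image
score = model(image).max()          # logit of the top predicted class
score.backward()                    # gradient of that logit w.r.t. pixels

# Pixel-wise importance: a large |gradient| means a small input change there
# moves the score a lot. Collapse RGB channels to one saliency map.
saliency = image.grad.abs().max(dim=1)[0]   # shape (1, 224, 224)
print(saliency.shape)
```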
AI is also applied to diagnosing many other diseases whose pain lasts for a short period, or persists even after treatment ends. Pain is subjective in nature; it is not the same in two persons, even for the same illness [20]. As explained in [20], pain cannot be characterized for an individual from one experiment or analysis; it needs to be observed, or experiments conducted, numerous times. In earlier times there was no alternative to mitigating pain other than accepting it, but the invention of the first anesthetic drug, ether, altered the scenario: unpleasant pain is now mitigated by injecting anesthetics in situations where tolerating the pain is not acceptable and its severity must be reduced [20]. This review focuses on pain types and the XAI approaches applied in pain models, and on how helpful they can be in pain detection.

2. Scope of Review

This review is solely concerned with pain-scaling approaches and machine learning or deep learning, extending the model's classification decisions with explainability. Pain is experienced differently across causes, ages and genders, with no common standard that can be relied on: a person with a different medical condition cannot be treated with the same medication dosage [20]. For instance, surgical patients are given anesthesia drugs, headache patients a dose of medicine, and others opioids, which when misused or abused lead to further chronic complications. We discuss how XAI can let an AI model predict the associated pain, the difficulties clinicians and AI models face when assessing pain, and the bridge between the two.

3. Pain Measurement and Variation

The measurement of pain draws attention because it is still not reliable today, after many advances in science and technology. It gained further attention when researchers approached it in different ways with ML and AI models, even though the outcomes are not yet trustworthy.

3.1. Pain Measurement

The treatment of pain became far less troublesome after the invention of anesthetic drugs (ether), and mortality during surgeries fell significantly. The treatment of postoperative pain, however, remained unsophisticated and largely opioid-based, receiving scant attention in the literature. The invention of anesthesia in 1846, and a hundred years of research on anesthetic agents after it, laid the foundation for the measurement of pain in 1940, when James Hardy, Harold Wolff and Helen Goodell of Cornell University began working on a method to measure pain intensity. First assuming pain to be the endpoint of overstimulation of any recognized sensory mechanism, the group found that pain had its own neurological pathways and most likely its own peripheral receptors and cerebral centers. They devised a dolorimeter, a device that focused light on a blackened area of skin and elicited a painful stimulus at 45 °C (113 °F) [20,21,22]. This led them to a scoring system for recording pain intensity: "Twenty-one discriminable intensities of pain were observed between the threshold pain and the ceiling pain, a scale of pain intensity is proposed, the unit of which is called a 'dol'." In 1951, the dolorimeter was used for the first time on patients, in an attempt to assess the effectiveness of analgesia during labor [21,22]. Two years later, a group of anesthetists evaluated the dolorimeter as an instrument for assessing pain, principally in pain-clinic patients, conducting over a hundred hours of testing on themselves. They concluded that the dolorimeter might serve as a tool for evaluating analgesic drugs but had little application as a measuring tool in patients [22].
The dolorimeter's main critic was Henry Beecher, Professor and Chair of the Department of Anesthesia at Massachusetts General Hospital. He insisted that pain research could only be carried out by studying real pain in patients, taking into account all the subjective, emotional overlays that accompanied the origins of the pain. His work on measuring pain accordingly became extensive, one of the many areas in which he tried to quantify subjective responses. His randomized, blinded trials used placebos, then a very new concept, and a crossover design in which patients served as their own controls, receiving two or more analgesics during a given painful episode; he measured a single response, the presence of a 50% reduction in pain. Beecher's meticulous methodology became the foundation for future research into clinical pain management and analgesic efficacy [21,22]. Pain, then, is subjective, individually centered and usually measured by self-report from the suffering person. Tools for measuring pain include the Visual Analogue Scale (VAS), the Verbal Rating Scale (VRS) and several multidimensional tools, developed after many experiments and clinical trials that found them cost-effective and robust. The patient's self-report, the quantification of the pain experienced, is given when talking with the clinician [20,23,24]; a clinician interprets it, it remains a standard for treating patients even today, and no special skill is needed to administer it [24]. Beyond self-report, the automatic detection of facial expressions is of high importance, given its applications in fields such as biometrics, forensics, medical diagnosis, monitoring, defense and surveillance. Pain is not a continuous experience; it varies and intensifies with time and cannot be predicted.

3.2. Pain Variation

Pain from some surgeries and injuries lasts long, making it difficult for the patient to identify the cause; the pain could, for example, stem from other factors related to the musculoskeletal system. Surgical pain, on the other hand, is severe, as it involves a loss of blood [20]. Assessing pain with AI models has therefore received much attention from researchers, and considerable work has gone into detecting pain automatically, from ML to deep learning (DL) [25,26,27,28,29].
In [20], the mechanism of pain is discussed in detail: the A-delta and C nerve fibers are sensitive to sharp and dull pain sensations, respectively. Because pain is sensitive to the environment, distress and emotional condition, people can experience it without any visible sign on the face; social, economic and cultural factors also lead people to perceive pain differently from one another [20,26,27,28,29,30,31,32,33,34,35].
In anesthetics, pain is the key parameter to regulate smoothly during surgical operations, so preoperative and postoperative monitoring of pain is highly important across surgery types, whether on the eyes, heart, brain or other organs [28]. Moreover, some patients cannot communicate pain verbally: those under anesthesia for a long time, children, dementia patients, noncooperative patients, etc. In such cases the self-report measurement mentioned above is not useful, and the variation of pain in such patients cannot be dealt with easily [29,30,31,32,33,34].

4. Explainability in Pain Models

In [36], the concept of human-computer interaction is discussed, with roots in cognitive science, particularly human intelligence, and in knowledge discovery/data mining in computer science together with AI. Intelligence is defined by human intelligence in cognitive science and by AI in computer science; in both cases the intelligence should be usable, and the needed ingredients are prior data, knowledge, generalization of data, dimensionality and its explanatory factors. In the medical sector, raw pain data are available; the problem lies in generalizing the data to different pain types and in the lack of explanatory factors. To date, misuse of "interpretability" and "explainability" in many contexts has led models to be perceived as ineffective by humans. The sense a model makes to a human end observer is called interpretability, in other words "transparency" [37,38]. Depending on their transparency, models can be simulatable, decomposable or algorithmically transparent; a model understandable to humans is called an explainable model [38]. Table 1 lists explainable features for different pain types. Chest pain in most cases relates to the heart, though it is sometimes caused by problems with the lungs, esophagus, muscles, ribs or nerves. In most chest-pain cases, the doctor evaluates the electrocardiogram (ECG), vital signs, the patient's past medical history (PMHx), the patient's symptoms (Sx) and heart-rate variability (HRV), the beat-to-beat variation of the interval between successive heartbeats. The explainable features in Table 1 are of high variable importance, as they are the features researchers use to detect pain with machine-learning or deep-learning methods.
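For concreteness, two standard time-domain HRV summaries can be computed in a few lines from RR intervals, the times between successive heartbeats; the interval values below are illustrative.

```python
# Time-domain HRV features from RR intervals (milliseconds).
import numpy as np

rr_ms = np.array([812, 790, 845, 830, 798, 776, 820, 805])  # illustrative RR intervals

sdnn = rr_ms.std(ddof=1)                       # overall variability of intervals
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # short-term, beat-to-beat variability

print(f"SDNN: {sdnn:.1f} ms, RMSSD: {rmssd:.1f} ms")
```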
Back pain has many features that research has modeled with AI, such as electromyography (EMG), HRV, pelvic incidence, tilt, slope and direct tilt, as explained in Section 4.2. Shoulder pain and headache pain share common features used for AI-based detection, namely facial images and facial landmarks; for headache pain, further features such as age and visual analog scale (VAS) readings are also considered to determine the cause. Surgical pain is critical for the patient in the operating theatre, and the electroencephalogram (EEG) signal gives the most relevant data to anesthesiologists for surgical/postoperative pain.

4.1. Chest Pain

Physiological signs used for modeling pain are the most trustworthy data; behavioral signs have lower reliability. In chest pain, there is a risk in evaluating vital signs without involving any new variable [39]. Patients with chest pain at the emergency department (ED) pose a major logistic challenge, as the majority have noncardiac symptoms and often benign disorders that need neither emergency treatment nor hospitalization. Acute chest pain of primary cardiovascular origin ranges from severe to no pain, and from acute coronary syndromes to harmless conditions. Chest pain accounts for a large share of emergency department cases and is assessed using the HEART (history, ECG, age, risk factors, troponin) score shown in Table 2. Acute coronary syndrome-related mortality is common among patients presenting to the emergency department, hence the emergency physician's need to diagnose myocardial ischemia, including unstable angina, non-ST elevation myocardial infarction (NSTEMI) and ST elevation myocardial infarction (STEMI) [39,40,41,42,43]. The patient history, i.e., past medical history (PMHx) including smoking, drugs, medication, etc., and a physical examination for vital-sign changes may not be reliable grounds for emergency treatment decisions by physicians [41]; the data therefore become input to a computer algorithm. The risk posed by the disease can be predicted with an AI model only once the patient's condition is stable and the patient has been transferred from the emergency department to the general ward, i.e., when the risk is lower and the pain may reflect a chronic rather than a critical cause. Chest-pain records in the patient's history are critical for determining the underlying cause; treatment and clinical assessment are determined by many factors, including age, history of medication and surgeries, therapies, physiological signs, etc. [42,43,44,45].
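As a concrete illustration, the sketch below maps a computed HEART score to the risk bands of Table 2; the component scoring (0 to 2 points each) follows [40,44], while the function name and interface are our own.

```python
# HEART score -> disposition, per the bands tabulated in Table 2.
def heart_disposition(history: int, ecg: int, age: int,
                      risk_factors: int, troponin: int) -> str:
    """Each HEART component contributes 0, 1 or 2 points."""
    score = history + ecg + age + risk_factors + troponin
    if score <= 3:
        return f"score {score}: low risk (~2.5% MACE), hospitalization not necessary"
    if score <= 6:
        return f"score {score}: intermediate risk (~20.3% MACE), hospitalization necessary"
    return f"score {score}: high risk (~72.7% MACE), immediate admission"

# Hypothetical patient: suspicious history, older age, one risk factor.
print(heart_disposition(history=1, ecg=0, age=2, risk_factors=1, troponin=0))
```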

4.2. Back Pain

Lower back pain remains the most prevalent musculoskeletal disorder in the world, with populations at risk from different conditions. Common back pain can turn into more distressing chronic back pain when left undiagnosed for a long time, a pattern especially common in occupational workers, and even physicians cannot always determine the cause of chronic back pain from MR (magnetic resonance) images. Thanks to numerous advances in medical image processing and AI, physicians can now optimize the time spent on diagnosis and treatment. Around 60% to 80% of the UK population may experience back pain at some point in their life [46], and among chronic diseases worldwide, one-fourth of the population suffers back pain. Advances in AI have reduced the risk by enabling fast diagnosis, preventing acute back pain from becoming chronic. Electromyography (EMG), heart-rate variability (HRV), pelvic incidence, pelvic tilt (Figure 1), lumbar lordosis angle, sacral slope, pelvic radius, degree of spondylolisthesis, pelvic slope, direct tilt, thoracic slope, cervical tilt, sacrum angle, scoliosis slope, gait features, pressure-sensor data for assessing sitting posture and erector spinae muscle activity are some of the explainable features for back-pain diagnosis using AI [46,47,48,49,50,51].
Figure 1 shows the pelvic tilt, a feature used to describe lower back pain (LBP), where PT denotes pelvic tilt, PI pelvic incidence and SS sacral slope. PT is the angle between the vertical line through the midpoint of the two hip-joint centers and the line connecting that midpoint with the midpoint of the sacral end plate. PI is the angle between the line connecting the midpoint of the two hip-joint centers with the midpoint of the sacral end plate and the line perpendicular to the sacral end plate at its center. SS is the angle between the horizontal line and the sacral end plate.
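A hedged sketch of how such features support explainable back-pain modeling: train a tree ensemble on the spinopelvic measurements and read off feature importances. The data below are synthetic stand-ins in the spirit of the UCI vertebral-column dataset, not a real cohort.

```python
# Feature importance over spinopelvic measurements (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
names = ["pelvic_incidence", "pelvic_tilt", "lumbar_lordosis_angle",
         "sacral_slope", "pelvic_radius", "degree_spondylolisthesis"]
X = rng.normal(size=(300, len(names)))
# Illustrative label: "abnormal" when the spondylolisthesis grade dominates.
y = (X[:, 5] + 0.3 * X[:, 1] > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")

# Geometric note: by construction of the angles in Figure 1, PI = PT + SS,
# so these three features are linearly dependent (one is redundant).
```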

4.3. Shoulder Pain

The shoulder joint has bones packed in close contact with muscles, tendons and ligaments; pain is usually caused by the rotator-cuff tendons packed under the bony area of the shoulder. In most shoulder-pain research, labeled faces were used to rate the pain level into classes such as no pain, medium pain and high pain, or finer scales. Patients with shoulder pain underwent physical tests on the abnormal shoulder, and the corresponding pain levels were inferred from their facial expressions [53]. A physiotherapist conducted different motion tests to gauge the level of pain: passive and active tests, performed with the patient resting supine on a bed and in a standing position, respectively. In the passive test, the physiotherapist rotates the patient's limb until the maximum range is reached or the patient feels pain and asks to stop; clinically, the active test is performed before the passive one [54]. The UNBC-McMaster shoulder pain database is the publicly accessible dataset on which most pain research is carried out today. In [55], an ensemble deep-learning model (EDLM) tested on the UNBC database achieved good accuracy, and in [56], facial-muscle-based action units (AUs) are used to assess pain from UNBC shoulder-pain-archive facial images. Figure 2 shows a torn or tearing tendon as an example of a cause of shoulder pain.
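For concreteness, the UNBC archive's frame-level pain labels follow the Prkachin-Solomon Pain Intensity (PSPI) metric, which combines facial action unit (AU) intensities; a minimal sketch of that scoring rule is shown below, with illustrative AU values.

```python
# PSPI: frame-level pain intensity from facial action unit (AU) intensities.
def pspi(au4: int, au6: int, au7: int, au9: int, au10: int, au43: int) -> int:
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43 (eye closure)."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Brow lowering (AU4=3), cheek raise (AU6=2), upper-lip raise (AU10=1):
print(pspi(au4=3, au6=2, au7=0, au9=0, au10=1, au43=0))   # -> 6
```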

4.4. Headache Pain

Headache pain refers to pain arising from sensations occurring in the nerve fibers of the head. Headache is a neurological disorder that can stem from different stimuli within the head, and its dynamics differ with severity; the perception of headache pain varies from person to person, as explained in [20,57,58]. Damage to tissues or organs inside the human body can also cause the sensation of headache pain. The source, severity and duration of the pain distinguish the different headache types [57,58,59,60]. The most common classification divides headaches into primary and secondary, as set out in the International Classification of Headache Disorders (ICHD), 3rd edition [61].

4.4.1. Primary Headaches

Primary headaches occur without any underlying medical illness or condition, meaning they have no known serious cause for the pain stimulus. They include cluster headaches, migraines, tension-type headaches and new daily persistent headache (NDPH) [60], as summarized in Table 3. Cluster headaches are triggered by nitroglycerin, histamine and alcohol consumption and are usually accompanied by eye watering, nasal congestion and swelling around the eye on the affected side; symptoms last from 15 min to 3 h, and the cluster attacks persist for weeks or months [57,58]. Migraine headache pain affects one side of the head, with two subtypes: migraine with aura and without aura. An aura is a sensation perceived by the patient that signals a condition affecting the brain. Migraine with aura has been associated with an increased risk of ischemic stroke, whereas little such risk attaches to migraine without aura. Diagnostically, migraine with aura involves aura symptoms lasting from 5 to 60 min, while migraine without aura lasts 4 to 72 h. Tension-type headache is the most common, with a high lifetime prevalence in the general population, and impacts the socioeconomic life of the individual. NDPH is present throughout the day and persists daily from the day it starts [57,58,59,60,61,62].

4.4.2. Secondary Headaches

Secondary headaches occur in response to another condition that causes headache. The ICHD states that a headache occurring in close temporal relation to a disorder known to cause headache should be considered secondary unless proved otherwise; the headache worsens or improves as the causative disorder worsens or improves, and it can sometimes be mistaken for, and mistreated as, a primary headache. Such headaches generally last 3 to 6 months, depending on the severity of the cause [63,64,65]. As explained in [20], each headache can be associated with acute or chronic pain according to its duration and acuteness.

4.5. Surgical/Postoperative Pain

Surgical pain carries many risks that may lead to death, so the assessment of pain is highly important during and after a surgical operation. To reduce the pain, as explained in [20,21,22,23,66,67,68,69], general anesthesia is a safe and fundamental component of performing surgeries. Anesthesiologists monitor pain and recommend a proper anesthesia dosage according to the patient's health, age and type of surgery, making the maintenance of anesthesia levels a difficult task. Pain relates to brain dynamics, offering the potential to trace differences in the brain's activity under different anesthetics. In [66,67,68,69], work was carried out on assessing depth-of-anesthesia (DoA) levels from different brain signals and relating them to the bispectral index (BIS) value, making surgical operations more convenient and safer to continue, since the patient will experience pain if they become awake.
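As a rough illustration of the kind of EEG feature DoA research builds on, the sketch below estimates relative spectral band power with Welch's method on a synthetic signal; real indices such as BIS combine several proprietary features, so this is illustrative only.

```python
# Relative EEG band power via Welch's method (synthetic stand-in signal).
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)              # 30 s of "frontal EEG"
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(lo, hi):
    """Integrate the PSD over [lo, hi) Hz (rectangle rule)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

total = band_power(0.5, 47)
for name, (lo, hi) in {"delta": (0.5, 4), "alpha": (8, 13), "beta": (13, 30)}.items():
    print(f"{name}: {band_power(lo, hi) / total:.2f} of total power")
```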

5. Explainable AI Models

Medical AI is used for clinical diagnosis and treatment suggestions, and the application of DL spans many biomedical fields, from genomic applications such as gene expression to public health management and epidemic prevention. Explainable artificial intelligence (XAI) models are needed to attach context-based explanations to the decisions machines make in clinical decision making; which decisions require explanation depends on the application domain. AI models across science and technology usually make skill-based decisions from the datasets they are trained or tested on, so a clear understanding of the model's decision is expected. Medicine comprises two distinct areas: the science of medicine and clinical medicine. Clinical medicine focuses on the patient at the bedside, where the physician's communication with the patient is routine, and the physician's medical advice and selection of the required therapy can be supported by explainable AI models. The explanations behind these decisions must be communicated to the patients effectively and understandably, which is possible with XAI [37,70,71,72,73,74,75,76,77,78,79,80,81,82].
Interest in XAI models has grown as they gain importance for producing explainable outputs from machine-learning algorithms. Reinforcement learning (RL) is one type of machine-learning algorithm that uses goal-directed learning: an agent learns by interacting with an environment to achieve a goal, and the environment returns rewards. Reinforcement learning in the healthcare domain is well explained in [83]. Pain is a critical area of medical diagnostics, headache pain being almost unexplainable, and this paper aims to motivate the development of XAI models that can reach true, unbiased results in diagnosis and treatment. To date, few deep-learning models for pain diagnosis or treatment are explainable, as pain is a subjective experience that varies with time and treatment; arguably every treatment of illness leads to pain in some way, felt with the given dosage of medicine or with the underlying illness. This is one of the first papers to review the features of the different pain types in machine-learning and deep-learning models that are to be made explainable, so that the algorithm's decisions can be understood by the end user [70,83,84]. A more detailed survey can be addressed in future review papers in this field. This review turns researchers' interest toward pain and shows how advances are making a patient's pain an understandable feature to implement with explainable AI. The variable importance of the explainable features is given in Table 4, based on the reviewed papers that used each variable to determine pain.
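To make the agent-environment loop concrete, the minimal sketch below implements tabular Q-learning on a two-state toy "treatment" environment; the states, actions and rewards are entirely hypothetical and only illustrate how reward feedback shapes a policy.

```python
# Tabular Q-learning over a hypothetical two-state treatment environment.
import numpy as np

n_states, n_actions = 2, 2            # e.g., {stable, in-pain} x {wait, medicate}
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(3)

def step(state, action):
    """Hypothetical dynamics: medicating while in pain is rewarded."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = int(rng.integers(n_states))
    return reward, next_state

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[state].argmax())
    reward, next_state = step(state, action)
    # Q-learning update: move Q toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)   # the learned policy prefers action 1 (medicate) in state 1 (in-pain)
```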

5.1. AlexNet

AlexNet is a convolutional neural network (CNN) architecture designed for large datasets. It consists of eight learned layers: five convolutional and three fully connected. With so many parameters there is a problem of overfitting, which is addressed with data augmentation and dropout [71]. In [72], the AlexNet model is applied to detecting brain hemorrhage in computer tomography (CT) images: the long-standing problem of classifying normal versus hemorrhage brain CT images is solved with AlexNet, whose trained filters extract richer features.

5.2. VGGNet

After AlexNet, CNNs gained further popularity through VGG, used with 16 and 19 layers. VGG achieved top performance on ImageNet by increasing the depth of the network with additional convolutional layers, at the cost of computing speed [73]. Since early COVID diagnosis and treatment may help patients survive with few complications, the authors of [74] compared imaging modalities and concluded that ultrasound images provide more reliable data and superior detection accuracy than X-ray (86%) and CT (84%): ultrasound reached almost 100% accuracy and is easy to acquire at the patient's bedside.

5.3. ResNet

ResNet makes it possible to train much deeper neural networks, which are otherwise difficult to train. Also trained on ImageNet, it achieved higher performance than VGG while being 8x deeper, and it also performed well on COCO detection and segmentation. As seen in [75], 50-, 101- and 152-layer-deep ResNets are used, depending on the data. In [76], color fundus images for diabetic retinopathy are classified using Inception-ResNet V2 as a biomedical application of AI, with accuracies over 80%.

5.4. DenseNet

Although CNNs were introduced over two decades ago, improvements in hardware and network structure have only recently allowed the training of truly deep CNNs. In the deep CNNs discussed above, which surpass 100 layers, the gradient passing through many layers may vanish and wash out by the time it reaches the other end; DenseNet counters this with dense connectivity and very narrow layers (e.g., 12 filters per layer) [77]. In [78], the proposed DenseNet-121 model reported a more accurate patient-recognition rate (PRR) and image-recognition rate (IRR), improving these metrics by 2–8% and 2–9% compared with the VGG16 and ResNet50 convolutional neural network models.
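As a hedged sketch of how the four backbones reviewed above are typically adapted for pain research, the snippet below loads them from torchvision and replaces each final layer for a hypothetical three-class pain-level task; the weight identifiers follow current torchvision releases (pass weights=None to skip the download), and the class count is our assumption.

```python
# Transfer-learning heads for a hypothetical 3-class pain-level task.
import torch.nn as nn
from torchvision import models

num_classes = 3                       # e.g., no / medium / high pain (assumed)

alex = models.alexnet(weights="IMAGENET1K_V1")
alex.classifier[6] = nn.Linear(4096, num_classes)

vgg = models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier[6] = nn.Linear(4096, num_classes)

res = models.resnet50(weights="IMAGENET1K_V1")
res.fc = nn.Linear(res.fc.in_features, num_classes)

dense = models.densenet121(weights="IMAGENET1K_V1")
dense.classifier = nn.Linear(dense.classifier.in_features, num_classes)
```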

6. Discussion

Deep-learning methods are widely used in many healthcare applications, and the utilization of AI has reduced the burden on the healthcare system considerably. The datasets available for training models have grown over time, and good results have been achieved even with small amounts of medical data. Investment in AI for healthcare has increased tremendously in recent years, with surgical operations also assisted by AI systems that instantly detect and assess patients' health from the electroencephalogram (EEG), electromyogram (EMG), pulse rate and electrocardiogram (ECG).
Pain detection with these DL methods remains a difficult task, as it cannot yet be accurate enough for diagnosis. Datasets have been developed for pain-detection research, but the recorded pain is neither constant nor static: pain is a sensation caused by some medical disorder, intertwined with other medical conditions, and is therefore highly difficult to predict. Pain varies with time and cannot be evaluated from a single facial image. Work on the publicly available datasets has shown that headache pain is the most difficult type to predict, and little research has addressed it, although, as discussed above in this paper, real progress has been made in detecting the other pain types. Incorporating XAI models may expand the set of features that are explainable for a given illness and its cause, and clarify feature importance in diagnosis.

7. Conclusions

This review surveyed the different AI-based approaches to pain and explained the importance of explainable AI in health and medicine. It gives an overview of the pain types that are diagnosed automatically from facial emotions and expressions, vital signs and other important signals in the available data. There is a gap between engineering systems and the real-time diagnosis of patients in pain, which should be filled using AI approaches. Pain from different sources, such as injury, illness and tissue damage, does not produce the same sensation, and pain is so persistent that it leads to stress [20]. We discussed the meaning of intelligence in explaining the pain features studied. The pain scale, verbal or rated by the physician, is time-consuming and unreliable, placing more stress on the health system, whereas applying AI models that diagnose without invasive methods offers broad scope for healthcare-system improvement across diseases. The features explainable for the diagnosis of each pain type, as listed in Table 1, should be the focus of a solution.

Author Contributions

Conceptualization, M.F.A. and J.-S.S.; methodology, R.M., M.F.A., F.-J.H., W.-T.C. and J.-S.S.; formal analysis, J.-S.S. and M.F.A.; investigation, R.M. and J.-S.S.; resources, R.M. and J.-S.S.; data curation, R.M. and J.-S.S.; writing—original draft preparation, R.M.; writing—review and editing, J.-S.S. and M.F.A.; visualization, J.-S.S. and M.F.A.; supervision, M.F.A., F.-J.H., W.-T.C. and J.-S.S.; project administration, J.-S.S.; funding acquisition, J.-S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (MOST) of Taiwan, grant number MOST 110-2221-E-155-004-MY2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Ministry of Science and Technology, Taiwan, for its partial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Matheny, M.; Sonoo, T.I.; Mahnoor, A.; Danielle, W. (Eds.) Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril; NAM Special Publication; National Academy of Medicine: Washington, DC, USA, 2019.
2. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. Artif. Intell. Healthc. 2020, 25–60. Available online: https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/B9780128184387000022 (accessed on 1 April 2022).
3. Aniek, F.M.; Jan, A.K.; Peter, R.R. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 2021, 113, 103655.
4. Rajkomar, A.; Oren, E.; Chen, K.; Dai, A.M.; Hajaj, N.; Hardt, M.; Liu, P.J.; Liu, X.; Marcus, J.; Sun, M.; et al. Scalable and accurate deep learning with electronic health records. NPJ Digit. Med. 2018, 1, 18.
5. Wu, T.Y.; Majeed, A.; Kuo, K.N. An overview of the healthcare system in Taiwan. Lond. J. Prim. Care 2010, 3, 115–119.
6. Lee, S.Y.; Chun, C.B.; Lee, Y.G.; Seo, N.K. The National Health Insurance system as one type of new typology: The case of South Korea and Taiwan. Health Policy 2008, 85, 105–113.
7. Victor, B.K.; Yang, C.T. The equality of resource allocation in health care under the National Health Insurance System in Taiwan. Health Policy 2011, 100, 203–210.
8. Chi, C.; Lee, J.L.; Schoon, R. Assessing Health Information Technology in a National Health Care System—An Example from Taiwan. Adv. Health Care Manag. 2012, 12, 75–109.
9. Tonekaboni, S.; Joshi, S.; McCradden, M.D.; Goldenberg, A. What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. In Proceedings of the 4th Machine Learning for Healthcare Conference, Ann Arbor, MI, USA, 9–10 August 2019; Volume 106, pp. 359–380.
10. Qinghan, X.; Mooi, C.C. Explainable deep learning based medical diagnostic system. Smart Health 2019, 13, 100068.
11. Bonnie, B.D.; Jessica, L.; Jaime, L.N.; Qiana, B.; Daniel, A.; Robert, J.N. Use of Electronic Medical Records for Health Outcomes Research: A Literature Review. Med. Care Res. Rev. 2009, 66, 611–638.
12. Lau, E.C.; Mowat, F.S.; Kelsh, M.A.; Legg, J.C.; Engel-Nitz, N.M.; Watson, H.N.; Collins, H.L.; Nordyke, R.J.; Whyte, J.L. Use of electronic medical records (EMR) for oncology outcomes research: Assessing the comparability of EMR information to patient registry and health claims data. Clin. Epidemiol. 2011, 3, 259–272.
13. Shuo, T.; Wenbo, Y.; Jehane, M.L.G.; Peng, W.; Wei, H.; Zhewei, Y. Smart healthcare: Making medical care more intelligent. Glob. Health J. 2019, 3, 62–65.
14. Tjoa, E.; Guan, C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4793–4813.
15. Marzyeh, G.; Luke, O.R.; Andrew, L.B. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 2021, 3, 745–750.
16. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-rays with Deep Learning. arXiv 2017, arXiv:1711.05225.
17. Han, H.; Liu, X. The challenges of explainable AI in biomedical data science. BMC Bioinform. 2022, 22, 443–445.
18. Dave, D.; Het, N.; Smiti, S.; Pankesh, P. Explainable AI meets Healthcare: A Study on Heart Disease Dataset. arXiv 2020, arXiv:2011.03195.
19. Singh, A.; Sengupta, S.; Mohammed, A.R.; Faruq, I.; Jayakumar, V.; Zelek, J.; Lakshminarayanan, V. What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification. In Ophthalmic Medical Image Analysis; Springer: Cham, Switzerland, 2020; Volume 12069, pp. 21–31.
20. Chen, J.; Abbod, M.; Shieh, J.-S. Pain and Stress Detection Using Wearable Sensors and Devices—A Review. Sensors 2021, 21, 1030.
21. Myles, P.S.; Christelis, N. Measuring pain and analgesic response. Eur. J. Anaesthesiol. 2011, 28, 399–400.
22. Noble, B.; Clark, D.; Meldrum, M.; Ten Have, H.; Seymour, J.; Winslow, M.; Paz, S. The measurement of pain, 1945–2000. J. Pain Symptom Manag. 2005, 29, 14–21.
23. Virrey, R.A.; Liyanage, C.D.S.; Petra, M.I.B.P.H.; Abas, P.E. Visual data of facial expressions for automatic pain detection. J. Vis. Commun. Image Represent. 2019, 61, 209–217.
24. Yang, R.; Tong, S.; Bordallo, M.; Boutellaa, E.; Peng, J.; Feng, X.; Hadid, A. On pain assessment from facial videos using spatio-temporal local descriptors. In Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), Oulu, Finland, 12–15 December 2016; pp. 1–6.
25. Sourav, D.R.; Mrinal, K.B.; Priya, S.; Anjan, K.G. An Approach for Automatic Pain Detection through Facial Expression. Procedia Comput. Sci. 2016, 84, 99–106.
26. Ashraf, A.B.; Lucey, S.; Cohn, J.F.; Chen, T.; Ambadar, Z.; Prkachin, K.M.; Solomon, P.E. The painful face—Pain expression recognition using active appearance models. Image Vis. Comput. 2009, 27, 1788–1796.
27. Ilyas, C.; Haque, M.; Rehm, M.; Nasrollahi, K.; Moeslund, T. Facial Expression Recognition for Traumatic Brain Injured Patients. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018), Funchal, Portugal, 27–29 January 2018; Volume 4, pp. 522–530.
28. McGrath, H.; Flanagan, C.; Zeng, L.; Lei, Y. Future of Artificial Intelligence in Anesthetics and Pain Management. J. Biosci. Med. 2019, 7, 111–118.
29. Garcia-Chimeno, Y.; Garcia-Zapirain, B.; Gomez-Beldarrain, M.; Fernandez-Ruanova, B.; Garcia-Monco, J.C. Automatic migraine classification via feature selection committee and machine learning techniques over imaging and questionnaire data. BMC Med. Inf. Decis. Mak. 2017, 17, 38.
30. Liu, D.; Cheng, D.; Houle, T.T.; Chen, L.; Zhang, W.; Deng, H. Machine learning methods for automatic pain assessment using facial expression information: Protocol for a systematic review and meta-analysis. J. Med. 2018, 97, e13421.
31. Pranti, D.; Nachamai, M. Facial Pain Expression Recognition in Real-Time Videos. J. Healthc. Eng. 2018, 2018, 7961427.
32. Lucey, P.; Cohn, J.F.; Matthews, I.; Lucey, S.; Sridharan, S.; Howlett, J.; Prkachin, K.M. Automatically Detecting Pain in Video Through Facial Action Units. IEEE Trans. Syst. Man Cybern. Part B 2011, 41, 664–674.
33. Jörn, L.; Alfred, U. Machine learning in pain research. Pain 2018, 159, 623–630.
34. Keight, R.; Aljaaf, A.J.; Al-Jumeily, D.; Hussain, A.J.; Özge, A.; Mallucci, C. An Intelligent Systems Approach to Primary Headache Diagnosis. In Intelligent Computing Theories and Application; Springer: Cham, Switzerland, 2017; Volume 10362, pp. 61–72.
35. Evan, C.; Angkoon, P.; Erik, S. Feature Extraction and Selection for Pain Recognition Using Peripheral Physiological Signals. Front. Neurosci. 2019, 13, 437.
36. Rasha, M.A.-E.; Hend, A.-K.; AbdulMalik, A.-S. Deep-Learning-Based Models for Pain Recognition: A Systematic Review. Appl. Sci. 2020, 10, 5984.
37. Holzinger, A. From Machine Learning to Explainable AI. In Proceedings of the 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), Košice, Slovakia, 23–25 August 2018; pp. 55–66.
38. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
39. Liu, N.; Koh, Z.X.; Goh, J.; Lin, Z.; Haaland, B.; Ting, B.P.; Ong, M.E.H. Prediction of adverse cardiac events in emergency department patients with chest pain using machine learning for variable selection. BMC Med. Inf. Decis. Mak. 2014, 14, 75.
40. Six, A.J.; Backus, B.E.; Kelder, J.C. Chest pain in the emergency room: Value of the HEART score. Neth. Heart J. 2008, 16, 191–196.
41. Stewart, J.; Lu, J.; Goudie, A.; Bennamoun, M.; Sprivulis, P.; Sanfillipo, F.; Dwivedi, G. Applications of machine learning to undifferentiated chest pain in the emergency department: A systematic review. PLoS ONE 2021, 16, e0252612.
42. Stepinska, J.; Lettino, M.; Ahrens, I.; Bueno, H.; Garcia-Castrillo, L.; Khoury, A.; Lancellotti, P.; Mueller, C.; Muenzel, T.; Oleksiak, A.; et al. Diagnosis and risk stratification of chest pain patients in the emergency department: Focus on acute coronary syndromes. A position paper of the Acute Cardiovascular Care Association. Eur. Heart J. 2020, 9, 76–89.
43. Amsterdam, E.A.; Kirk, J.D.; Bluemke, D.A.; Diercks, D.; Farkouh, M.E.; Garvey, J.L.; Kontos, M.C.; McCord, J.; Miller, T.D.; Morise, A.; et al. Testing of Low-Risk Patients Presenting to the Emergency Department with Chest Pain: A scientific statement from the American Heart Association. Circulation 2010, 17, 1756–1776.
44. Backus, B.E.; Six, A.J.; Kelder, J.C.; Bosschaert, M.A.R.; Mast, E.G.; Mosterd, A.; Veldkamp, R.F.; Wardeh, A.J.; Tio, R.; Braam, R.; et al. A prospective validation of the HEART score for chest pain patients at the emergency department. Int. J. Cardiol. 2013, 168, 2153–2158.
45. Zhang, P.I.; Hsu, C.C.; Kao, Y.; Chen, C.J.; Kuo, Y.W.; Hsu, S.L.; Liu, T.L.; Lin, H.J.; Wang, J.J.; Liu, C.F.; et al. Real-time AI prediction for major adverse cardiac events in emergency department patients with chest pain. Scand. J. Trauma Resusc. Emerg. Med. 2020, 28, 93.
46. Al Kafri, A.S.; Sudirman, S.; Hussain, A.J.; Fergus, P.; Al-Jumeily, D.; Al-Jumaily, M.; Al-Askar, H. A Framework on a Computer Assisted and Systematic Methodology for Detection of Chronic Lower Back Pain Using Artificial Intelligence and Computer Graphics Technologies. Intell. Comput. Theor. Appl. 2016, 9771, 843–854.
47. Tagliaferri, S.D.; Angelova, M.; Zhao, X.; Owen, P.J.; Miller, C.T.; Wilkin, T.; Belavy, D.L. Artificial intelligence to improve back pain outcomes and lessons learnt from clinical classification approaches: Three systematic reviews. NPJ Digit. Med. 2020, 3, 93.
48. Chen, D.; Zhang, H.; Kavitha, P.T.; Loy, F.L.; Ng, S.H.; Wang, C.; Phua, K.S.; Tjan, S.Y.; Yang, S.Y.; Guan, C. Scalp EEG-Based Pain Detection Using Convolutional Neural Network. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 274–285.
49. Azimi, P.; Yazdanian, T.; Benzel, E.C.; Aghaei, H.N.; Azhari, S.; Sadeghi, S.; Montazeri, A. A Review on the Use of Artificial Intelligence in Spinal Diseases. Asian Spine J. 2020, 14, 543–571.
50. Goldstein, P.; Ashar, Y.; Tesarz, J.; Kazgan, M.; Cetin, B.; Wager, T.D. Emerging Clinical Technology: Application of Machine Learning to Chronic Pain Assessments Based on Emotional Body Maps. Neurotherapeutics 2020, 17, 774–783.
51. Nitish, A. Prediction of low back pain using artificial intelligence modeling. J. Med. Artif. Intell. 2021, 4, 1–9.
52. Abelin-Genevois, K. Sagittal Balance of the Spine. Orthop. Traumatol. Surg. Res. 2021, 107, 102769.
53. Pikulkaew, K.; Boonchieng, E.; Boonchieng, W.; Chouvatut, V. Pain Detection Using Deep Learning with Evaluation System. Proceedings of Fifth International Congress on Information and Communication Technology. Adv. Intell. Syst. Comput. 2020, 1184, 426–435.
54. Lucey, P.; Cohn, J.F.; Prkachin, K.M.; Solomon, P.E.; Chew, S.; Matthews, I. Painful monitoring: Automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database. Image Vis. Comput. 2012, 30, 197–205.
55. Ghazal, B.; Xujuan, Z.; Ravinesh, C.D.; Jeffrey, S.; Frank, W.; Hua, W. Ensemble neural network approach detecting pain intensity from facial expressions. Artif. Intell. Med. 2020, 109, 101954.
56. Guglielmo, M.; Zhanli, C.; Diana, J.W.; Rashid, A.; Yasemin, Y.; Çetin, A.E. Pain Detection from Facial Videos Using Two-Stage Deep Learning. In Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, ON, Canada, 11–14 November 2019; pp. 1–5.
57. Straube, A.; Andreou, A. Primary headaches during lifespan. J. Headac. Pain 2019, 20, 35.
58. Sharma, T.L. Common Primary and Secondary Causes of Headache in the Elderly. Headache 2018, 58, 479–484.
59. Paul, R.; William, J.M. Headache. Am. J. Med. 2018, 131, 17–24.
60. Yamani, N.; Olesen, J. New daily persistent headache: A systematic review on an enigmatic disorder. J. Headac. Pain 2019, 20, 80.
61. IHS Classification ICHD-3. Available online: https://ichd-3.org/classification-outline/ (accessed on 18 January 2022).
62. Hansen, J.M.; Charles, A. Differences in treatment response between migraine with aura and migraine without aura: Lessons from clinical practice and RCTs. J. Headac. Pain 2019, 20, 96.
63. Vij, B.; Tepper, S.J. Secondary Headaches. In Fundamentals of Pain Medicine; Springer: Berlin/Heidelberg, Germany, 2018; pp. 291–300.
64. Keight, R.; Al-Jumeily, D.; Hussain, A.J.; Al-Jumeily, M.; Mallucci, C. Towards the discrimination of primary and secondary headache: An Intelligent Systems Approach. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2768–2775.
65. Sanchez-Sanchez, P.A.; García-González, J.R.; Rúa Ascar, J.M. Automatic migraine classification using artificial neural networks. F1000Research 2020, 9, 618.
66. Liu, Q.; Cai, J.; Fan, S.Z.; Abbod, M.F.; Shieh, J.S.; Kung, Y.; Lin, L. Spectrum Analysis of EEG Signals Using CNN to Model Patient's Consciousness Level Based on Anesthesiologists' Experience. IEEE Access 2019, 7, 53731–53742.
67. Liu, Q.; Ma, L.; Fan, S.Z.; Abbod, M.F.; Ai, Q.; Chen, K.; Shieh, J.S. Frontal EEG Temporal and Spectral Dynamics Similarity Analysis between Propofol and Desflurane Induced Anesthesia Using Hilbert-Huang Transform. BioMed Res. Int. 2018, 2018, 4939480.
68. Zi-Xiao, W.; Faiyaz, D.; Yan-Xin, L.; Shou-Zen, F.; Jiann-Shing, S. An Optimized Type-2 Self-Organizing Fuzzy Logic Controller Applied in Anesthesia for Propofol Dosing to Regulate BIS. IEEE Trans. Fuzzy Syst. 2020, 28, 1062–1072.
69. Yi-Feng, C.; Shou-Zen, F.; Maysam, F.A.; Jiann-Shing, S.; Mingming, Z. Electroencephalogram variability analysis for monitoring depth of anesthesia. J. Neural Eng. 2021, 18, 066015.
70. Lötsch, J.; Kringel, D.; Ultsch, A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics 2022, 2, 1–17.
71. Alex, K.; Ilya, S.; Geoffrey, E.H. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
72. Awwal, M.D.; Kamil, Y.; Huseyin, O. Application of Deep Learning in Neuroradiology: Brain Haemorrhage Classification Using Transfer Learning. Comput. Intell. Neurosci. 2019, 2019, 4629859.
73. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14.
74. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020, 8, 149808–149824.
75. Kaiming, H.; Xiangyu, Z.; Shaoqing, R.; Jian, S. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
76. Weijun, H.; Yan, Z.; Lijie, L. Study of the Application of Deep Convolutional Neural Networks (CNNs) in Processing Sensor Data and Biomedical Images. Sensors 2019, 19, 3584.
77. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. IEEE Conf. Comput. Vis. Pattern Recognit. 2017, 2017, 2261–2269.
78. Li, X.; Shen, X.; Zhou, Y.; Wang, X.; Li, T.-Q. Classification of breast cancer histopathological images using interleaved DenseNet with SENet (IDSNet). PLoS ONE 2020, 15, e0232127.
79. Chan, Y.K.; Chen, Y.F.; Pham, T.; Chang, W.; Hsieh, M.Y. Artificial Intelligence in Medical Applications. J. Healthc. Eng. 2018, 2018, 4827875.
80. Zemouri, R.; Zerhouni, N.; Racoceanu, D. Deep Learning in the Biomedical Applications: Recent and Future Status. Appl. Sci. 2019, 9, 1526.
81. Moraes, J.L.; Rocha, M.X.; Vasconcelos, G.G.; Vasconcelos Filho, J.E.; De Albuquerque, V.H.C.; Alexandria, A.R. Advances in Photopletysmography Signal Analysis for Biomedical Applications. Sensors 2018, 18, 1894.
82. Johnson, K.W.; Torres Soto, J.; Glicksberg, B.S.; Shameer, K.; Miotto, R.; Ali, M.; Ashley, E.; Dudley, J.T. Artificial Intelligence in Cardiology. J. Am. Coll. Cardiol. 2018, 71, 2668–2679.
83. Coronato, A.; Naeem, M.; De Pietro, G.; Paragliola, G. Reinforcement learning for intelligent healthcare applications: A survey. Artif. Intell. Med. 2020, 109, 101964.
84. Wells, L.; Bednarz, T. Explainable AI and Reinforcement Learning—A Systematic Review of Current Approaches and Trends. Front. Artif. Intell. 2021, 4, 550030.
Figure 1. Pelvic tilt. Adapted from Ref. [52].
Figure 2. Tendon torn/tear.
Table 1. Comparison of various pain types for explainability of the features.

| Pain Type | Pain-Affected Organs | AI/ML Techniques Used | Explainable Features in the Pain |
|---|---|---|---|
| Chest pain | Heart | Random forest (RF), support vector machine (SVM), artificial neural network (ANN), linear regression (LR), gradient boosting | ECG, vitals, HEART score, troponin, labs, exam, PMHx, Sx, HRV |
| Back pain | Back bone | K-nearest neighbor (K-NN), principal component analysis (PCA), RF, ANN, SVM, multilayer perceptron (MLP), LR, stochastic gradient boosting (SGM), naïve Bayes (NB) | EMG, HRV, pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, direct tilt, pelvic radius, degree spondylolisthesis, pelvic slope, thoracic slope, cervical tilt, sacrum angle, scoliosis slope, gait features, pressure-sensor data for sitting posture, erector spinae muscle activity |
| Shoulder pain | Shoulder joint/muscle | SVM, ResNet | Facial images, landmarks |
| Headache pain | Brain | Random oracle model (ROM), linear neural network (LNN), SVM, K-NN, ANN | Age, visual analog scale rating, duration of pain, facial images, landmarks |
| Surgical/postoperative pain | Body cells | AlexNet, VGGNet, CifarNet, ResNet, DenseNet | Electroencephalogram (EEG) |
Table 2. Predicting chances of hospitalization using the HEART score.

| HEART Score Points | MACE Occurrence | Hospitalization |
|---|---|---|
| 0–3 | 2.5% | Not necessary |
| 4–6 | 20.3% | Necessary |
| ≥7 | 72.7% | Immediate |
Table 3. Primary headache types.

| Primary Headache Type | Duration of Symptoms | Occurrence |
|---|---|---|
| Cluster-type | 15 min to 3 h | Frequent |
| Migraine with aura | ≥5 min to 60 min | Frequent |
| Migraine without aura | 4 to 72 h | Rare |
| NDPH | 24 h | Rare |
Table 4. Variable importance of explainable features.

| Pain Type | High Variable Importance Features | Less Variable Importance Features |
|---|---|---|
| Chest pain | ECG, vitals, HEART score, PMHx, Sx, HRV | Troponin, labs, exam |
| Back pain | Pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, direct tilt, pelvic radius, degree spondylolisthesis, pelvic slope, thoracic slope, cervical tilt, sacrum angle, scoliosis slope, gait features | EMG, HRV, pressure-sensor data for sitting posture, erector spinae muscle activity |
| Shoulder pain | Facial images, landmarks | - |
| Headache pain | Facial images, landmarks | Age, visual analog scale rating, duration of pain |
| Surgical/postoperative pain | Electroencephalogram (EEG) | - |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
