Review

Automatic Facial Palsy Detection—From Mathematical Modeling to Deep Learning

by Eleni Vrochidou 1,*, Vladan Papić 2, Theofanis Kalampokas 1 and George A. Papakostas 1

1 MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
2 Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, 21000 Split, Croatia
* Author to whom correspondence should be addressed.
Submission received: 27 October 2023 / Revised: 26 November 2023 / Accepted: 27 November 2023 / Published: 28 November 2023

Abstract
Automated solutions for medical diagnosis based on computer vision form an emerging field of science aiming to enhance diagnosis and early disease detection. The detection and quantification of facial asymmetries enable facial palsy evaluation. In this work, a detailed review of the quantification of facial palsy takes place, covering all methods ranging from traditional manual mathematical modeling to automated computer vision-based methods. Moreover, facial palsy quantification is defined in terms of facial asymmetry index calculation for different image modalities. The aim is to introduce readers to the concept of mathematical modeling approaches for facial palsy detection and evaluation and present the process of the development of this separate application field over time. Facial landmark extraction, facial datasets, and palsy grading systems are included in this research. As a general conclusion, machine learning methods for the evaluation of facial palsy lead to limited performance due to the use of handcrafted features, combined with the scarcity of the available datasets. Deep learning methods allow the automatic learning of discriminative deep facial features, leading to comparatively higher performance accuracies. Dataset limitations, proposed solutions, and future research directions in the field are also presented.

1. Introduction

Facial palsy is a common neuromuscular disorder causing facial weakness and impaired facial expressions [1]. Palsy patients lose control of the affected side of their face, experiencing drooping or stiffness of the muscles and disorders of the taste buds. Statistics regarding facial palsy report 25 incidents annually per 100,000 people, or approximately one person out of 60 in their lifetime, while an average of 40,000 palsy patients are reported in the United States every year [2]. Even though palsy does not cause patients physical pain, they experience psychological stress, external discomfort, and depression, since palsy affects their appearance, facial movements, feeding functions, and, thus, their daily lives [3]. Therefore, the accurate diagnosis and exact evaluation of the degree of palsy are essential for the objective assessment of the facial nerve’s function in terms of monitoring the progress or resolution of palsy. The latter could help in evaluating therapeutic processes and designing effective treatment plans.
Traditionally, the diagnosis of facial palsy is performed clinically by specialized neurologists, who ask patients to perform specific facial expressions in order to evaluate the condition of certain face muscles. The level of palsy is assessed by evaluating the symmetry between the right and left parts of the face in terms of various scoring standards and by measuring distances between facial landmarks on both sides with a simple ruler [4]. The manual and empirical evaluation of palsy is, therefore, both labor-intensive and subjective. Assessment based on visual inspection makes it hard to precisely quantify the severity of palsy, and it is not feasible to track improvements between subsequent rehabilitation interventions. Moreover, assessment relies on the degree of human expertise; thus, the clinical quantification of palsy may differ between neurologists [5]. Hence, an objective quantitative evaluation value would be useful.
Automatic inspection approaches can alleviate these disadvantages and provide more consistent and objective facial palsy diagnosis and evaluation methods, providing neurologists with an efficient decision-supporting tool [6]. The automatic quantitative evaluation of facial palsy has been a subject of research for many years. Several approaches use optical markers attached to human faces to determine the degree of palsy [7,8], as well as full-face laser scanning [9,10] or electroneurography (ENoG) and electromyography (EMG) signals. The latter approaches, although very accurate, require specialized high-cost equipment and a constrained clinical environment and presuppose physical interventions, which are obtrusive and uncomfortable. Moreover, the patients themselves cannot perform these approaches on their own to monitor their progress at home.
Recent advancements in image analysis algorithms, combined with the increasingly affordable cost of high-resolution capturing devices, have resulted in the development of efficient, simple, and cost-effective vision-based techniques for medical applications, reporting impressive state-of-the-art performances [11,12,13]. The diagnosis of various diseases is greatly assisted by facial abnormality recognition using computer vision [14,15], dynamically incorporating facial recognition into artificial intelligence (AI)-based medicine [16,17]. Automatic image-based facial palsy assessment could accelerate the diagnosis and progress evaluation of the disease, offering a non-invasive, simple, time- and cost-saving method that could be used by palsy patients themselves without the presence of a human expert.
Most image-based techniques are based on hand-crafted features associated with the affected facial regions, used as training data for machine learning and deep learning models. Such facial landmarks need to be accurate and discriminative for improving the performance of the models. Facial landmarking has been a subject of extensive research over the past few years, focusing on the extraction of specific landmarks for different medical conditions inferred from different facial regions [18,19]. Yet, facial landmarks’ annotation requires domain experts. Moreover, deep learning algorithms require large public datasets of facial palsy images for training and model performance comparisons, which are not easy to acquire due to medical confidentiality issues.
To this end, the contributions of this work are summarized as follows:
  • This work has collected all the necessary background information related to the detection of facial palsy. Specifically, it includes the following areas:
    a.
    The structure of typical facial assessment systems;
    b.
    Facial landmarks for palsy detection;
    c.
    Facial datasets;
    d.
    Facial palsy grading systems.
  • This work aggregately provides the mathematical formulation of all facial palsy asymmetry indices as follows:
    a.
    2D images;
    b.
    3D images;
    c.
    Videos.
  • This work reviews and compares the following mainstream techniques:
    a.
    Machine learning;
    b.
    Deep learning approaches for facial palsy detection and evaluation.
This work, to the best of our knowledge, is the first complete guide on facial palsy that integrates all the above relevant knowledge, ranging from mathematical modeling to deep learning, aiming to provide a mathematical foundation, establish baselines, and guide future research in this field. This work constitutes a literature review, providing a synthesis of the state of knowledge on the given topic.
The rest of the paper is structured as follows: Section 2 summarizes the necessary background information related to the detection of facial palsy; Section 3 includes the mathematical formulation of all facial palsy asymmetry indices; and Section 4 reviews facial palsy detection and evaluation methods. A discussion is provided in Section 5, while Section 6 concludes this paper.

2. Foundations

2.1. Overview of Typical Facial Palsy Assessment System

Image analysis methods have been widely utilized to either detect or quantify facial palsy. Therefore, based on their intended use, these techniques can be divided into (1) detection techniques, aiming to discriminate between unhealthy and healthy subjects by forming a binary classification problem, and (2) evaluation techniques, aiming to quantify the level of palsy, forming a multi-class classification problem. As a further division, two additional research sub-categories emerge: research based on static images and research based on videos. The research based on images is less comprehensive, since an image may contain less information than a video. Moreover, images need to depict specific patterns of facial movements, e.g., smile, eyebrow raising, etc., so as to identify facial paralysis. For this reason, techniques based on images may lead to less accurate results; however, they have lower demands and computational costs. The research based on videos can provide more accurate results; in this research, facial key point features are defined, and point tracking is performed to define facial asymmetries during facial movements.
For all cases, the outline of a basic system for facial palsy assessment is illustrated in Figure 1. The first preprocessing step usually includes color and geometric transformations, resizing, denoising, etc., to increase both the quality and the quantity of the image dataset. Then, facial detection is performed for the input image. Facial landmarks are extracted, constituting the key point facial symmetry features used to detect facial palsy. Finally, a classifier is trained to evaluate input images based on the discriminative power of the extracted features.
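To make this pipeline concrete, the sketch below strings the four stages together in Python. It is a minimal illustration, not a reference implementation: the landmark detector is a placeholder returning dummy points (any off-the-shelf 68-point detector could be substituted), and the toy asymmetry feature stands in for the principled indices of Section 3.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Stage 1: stand-in for the color/geometric transforms, resizing,
    and denoising described above; here we only normalize intensities."""
    return image.astype(np.float32) / 255.0

def detect_landmarks(image: np.ndarray) -> np.ndarray:
    """Stages 2-3: face detection and landmark extraction. Placeholder:
    any off-the-shelf 68-point detector (e.g., one trained on 300-W)
    would be plugged in here; dummy (x, y) points are returned instead."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 1.0, size=(68, 2))

def asymmetry_features(landmarks: np.ndarray) -> np.ndarray:
    """Toy feature vector: horizontal offsets from the estimated face
    midline; Section 3 defines the principled asymmetry indices that
    would replace this."""
    midline_x = landmarks[:, 0].mean()
    return np.abs(landmarks[:, 0] - midline_x)

def assess(image: np.ndarray, classifier):
    """Full pipeline: image -> landmarks -> features -> palsy grade."""
    feats = asymmetry_features(detect_landmarks(preprocess(image)))
    return classifier.predict(feats.reshape(1, -1))
```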
Deep learning approaches are also considered to be an alternative to hand-crafted feature extraction methods. In what follows, the mathematical modeling of both hand-crafted feature-based and deep learning-based approaches for facial symmetry determination is presented.

2.2. Facial Landmarks

Facial palsy detection and evaluation are based on the quantification of facial symmetry/asymmetry. The first step is to determine reference points on human faces (facial landmarks) and then develop appropriate mathematical modeling methods to numerically calculate symmetry. The most crucial step for defining efficient asymmetry indices is the appropriate selection of landmark reference points, which need to be clear, reproducible, and evenly distributed over the face surface, especially where palsy affects symmetry, such as around the eyes and mouth. Moreover, an ideal number of landmark points needs to be considered; a large number of points may increase the accuracy of palsy evaluation, though at a greater computational cost. We note that 2D techniques focus on frontal face views. Most approaches for facial palsy detection rely on static images, where facial landmarks are determined and geometric features are calculated. Based on the above, it is obvious that there are many challenges related to the determination of efficient facial landmarks. Moreover, landmark localization and extraction can be inaccurate, especially when using video sequences or in-the-wild benchmark image datasets; in these cases, landmark extraction and tracking can be rather challenging [20]. The latter is also verified by the vivid interest of researchers in this specific field of efficient facial landmark extraction, both for palsy detection and for other tasks, e.g., facial expression detection [21,22,23].
A general overview of the used facial reference points is provided in Figure 2a. In this case, there are 27 reference points (10 on each side and 7 on the middle vertical line): the upper (tr) and lower (gn) centers of the hairline and gnathion, respectively; the eyebrow upper center (eup) and upper lip (lup); the lower center of the lower lip (ldo); the center of the lips (st); the upper (nasion, n) and lower (sn) points of the center of the nose; the upper (aup) and lower (ado) corners of the ear; the inner (en) and outer (ex) corners of the eye; the corner of the mouth (ch); the outmost point of the zygomatic arch (zy); the nasal wing (an) and angle of the mandible (m); and the center of the pupil (p). The latter system is mainly used in facial surgery [24].
For facial palsy detection, however, specific points are required to highlight the dominant facial characteristics that are affected. More specifically, facial landmark detection plays an important role in the computer-aided diagnosis and evaluation of facial palsy, since all machine learning methods rely on facial landmarks to learn facial symmetry, texture, and shape features from patients’ images. Moreover, palsy patients have facial asymmetry symptoms in specific face areas, such as one-sided eye narrowing and deformation of the lip contour. Therefore, specific facial landmarks can concentrate on the most affected areas, both to reduce computations and to improve detection algorithms’ performance, due to the exclusion of redundant information. A well-known facial reference point system is that of the 68 points, found in the 300-W dataset [25], as illustrated in Figure 2b. The latter annotation system is also used in the Annotated Facial Landmarks for Facial Palsy (AFLFP) dataset [5], which represents the first annotated public facial landmark dataset used in facial palsy detection tasks. Variations have been proposed, and various face datasets are available, including indoor and outdoor face images with various expressions (neutral, surprise, smile, scream, squint, disgust, brow raise, etc.), along with their proposed facial landmark annotations. Figure 2c illustrates a variation with 98 reference points. It should be noted here that all images in this work are original and were made by the authors to serve the scope of the survey. The face model depicted in the figures is a render of an originally designed head sculpture model implemented in ZBrush by Maxon; lines, points, letters, and numbers were subsequently added to the render in Adobe Illustrator.

2.3. Facial Datasets

Table 1 includes information regarding a selection of the most popular facial palsy datasets. It should be noted that all datasets, apart from the AFLFP, YFP, and MEEI, only include face images of healthy subjects. AFLFP is the first annotated image dataset for facial palsy. YFP and MEEI are also recent facial palsy datasets; however, they do not specify the annotations of the facial landmark points and are used in deep learning-based approaches for the automatic quantitative evaluation of facial palsy.

2.4. Facial Palsy Grading Systems

Facial palsy diagnosis and rehabilitation require a comprehensive standard grading system to measure the extent of facial paralysis and its progression over time. A standard grading system is not yet available; however, several grading systems for evaluating facial palsy have been proposed, based on either traditional approaches, such as subjective judgements made by clinicians, or automated approaches, such as computer aided objective facial measurements using image analysis and classification [35]. Table 2 includes information regarding the most popular traditional and computer-based grading systems for facial paralysis evaluation.
The House–Brackmann (HB) facial paralysis scale ranks first among the most commonly used grading scales in the literature. It is used for the evaluation of facial palsy at the nerve trunk and consists of five impairment grades (Grades II to VI): mild dysfunction, moderate dysfunction, moderately severe dysfunction, severe dysfunction, and total paralysis. One more grade is assigned to normal function (Grade I). Details of the HB scale are included in Table 3.
In general, several grading methods for evaluating the severity of facial paralysis have been introduced, each with its own respective use case. The landmark point approach is the most widely used and is based on measuring the distances between facial landmarks. The distances are compared and evaluated according to the system used. This method is time consuming and probably less accurate than computer-based approaches, since its accuracy depends on the extracted data.
Computer vision methods, aided by machine/deep learning algorithms, have been employed to overcome the aforementioned problems and achieve better evaluation techniques. However, computer vision-based approaches rely heavily on the size and diversity of the existing datasets, and due to personal data protection, public facial palsy datasets are subject to many usage restrictions. Once a dataset is acquired, computer vision methods extract facial descriptors to build a model that can generalize to test data. Predictions emerge after these descriptors have been fed into classifiers, and the output can be either discrete (classification) or continuous (regression), depending on the solution that each method provides.

3. Facial Asymmetry Indices

Different image modalities can be employed to evaluate facial palsy. As already mentioned, the most popular approach is based on digital images and videos; however, alternatives able to provide extended information have also been introduced. The recent advances in image capturing devices, as well as their reduced costs and ease of acquisition, combined with recent advancements in image analysis algorithms, have greatly assisted computer vision-based research.
Digital imaging is widely used to assess facial morphology from 2D images. One limitation, though, is its inability to include depth measurements and, consequently, provide a more detailed anatomic facial morphology. Stereo-photogrammetry is used as an alternative to provide 3D facial surface images with accurately measured volumetric distances. Video sequences are also used for palsy detection, since they can capture temporal variation and movements, revealing motion details not perceived in static images. Yet, the nature of video imaging data remains very similar to that of digital imaging data. It should be noted here that traditional facial palsy diagnostic techniques also include magnetic resonance imaging (MRI) and computed tomography (CT); however, such datasets are associated with high costs. Therefore, they are scarce and not considered here. In what follows, the mathematical modeling of asymmetry indices in different image modalities is reviewed.

3.1. Asymmetry Indices on 2D Images

A face is a (relatively) symmetric structure, with its right side approximately mirroring its left side [54]. It should be noted that only 2% of faces in the general population are truly symmetric. The calculation of an asymmetry index in 2D images is mainly aided by reference lines on faces, which can be horizontal or vertical or pass through the centers of bilateral points. Several approaches have been developed and are analyzed in the following subsections, while approaches that do not use any reference lines or angles are also reported. It should be noted that distances are measured in either millimeters or pixels. In general, the higher the asymmetry index, the greater the asymmetry.

3.1.1. Distance Symmetry from a Vertical Reference Line

In this approach, the distance from the bilateral points to the reference median sagittal line is measured (Figure 3a). More specifically, firstly, the vertical reference line is defined, and then the height and distance differences from the bilateral points to the line are measured. Based on this approach, the asymmetry index (AI) [55] for a pair of bilateral facial points is calculated from the following equation:
$\mathrm{AI} = \frac{\left| d_R - d_L \right|}{d_R + d_L}$ (1)
where $d_R$ and $d_L$ are the distances from the vertical reference line for each side of the face, i.e., right and left, respectively.
Figure 3. Measuring asymmetries in 2D images by using distances: (a) from a vertical reference line; (b) from a horizontal reference line.
Based on Equation (1), an overall landmark-specific asymmetry index ($\mathrm{AI}_{overall}$) for a number $l$ of landmark points can be defined as the average of the $l$ AIs as follows:
$\mathrm{AI}_{overall} = \frac{1}{l} \sum_{i=1}^{l} \frac{\left| d_{Ri} - d_{Li} \right|}{d_{Ri} + d_{Li}}$ (2)
An advantage of AI is that it avoids overweighting large distances, since it is a difference-sum ratio, ranging from zero for perfect symmetry toward one as asymmetry increases. A disadvantage is that, being an absolute number, it does not denote which face point is out of symmetry when multiple landmark points are considered, nor the face side and the direction in which the points deviate from absolute symmetry.
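As a concrete illustration, a minimal NumPy sketch of Equations (1) and (2) follows; the landmark coordinates and the position of the reference line are invented example values, and real inputs would come from a landmark detector.

```python
import numpy as np

def asymmetry_index(d_right: np.ndarray, d_left: np.ndarray) -> np.ndarray:
    """Equation (1): per-pair difference-sum ratio; 0 = perfect symmetry."""
    return np.abs(d_right - d_left) / (d_right + d_left)

def overall_asymmetry_index(right_pts, left_pts, x_ref: float) -> float:
    """Equation (2): mean AI over l landmark pairs, using horizontal
    distances from a vertical reference line at x = x_ref."""
    d_r = np.abs(np.asarray(right_pts, dtype=float)[:, 0] - x_ref)
    d_l = np.abs(np.asarray(left_pts, dtype=float)[:, 0] - x_ref)
    return float(asymmetry_index(d_r, d_l).mean())

# Toy example: three bilateral pairs around a midline at x = 0.5.
right = [(0.70, 0.40), (0.65, 0.55), (0.62, 0.70)]
left = [(0.31, 0.40), (0.36, 0.56), (0.40, 0.70)]
print(overall_asymmetry_index(right, left, x_ref=0.5))  # small -> near symmetry
```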

3.1.2. Distance Symmetry from a Horizontal Reference Line

This approach considers a horizontal reference line defined by bilateral points that are hardly affected by palsy or other facial asymmetries, such as the pupils, and calculates the distance difference of other bilateral points from the reference line [56]. Distances from vertical and horizontal lines can be used in a complementary manner and combined [57]. The latter approach is illustrated in Figure 3b, with the sagittal line and the bipupillary line forming the vertical y-axis and the horizontal x-axis, respectively.
While the symmetry index AI has been defined solely with respect to the vertical reference line, no analogous index exists for the horizontal line alone. However, their combination is studied in [57], where a symmetry index, namely the z-score, is introduced. Firstly, for each pair of bilateral points, an asymmetry value is defined as either the sum of their horizontal x-coordinates or the absolute difference of their vertical y-coordinates. For distinct landmarks on a reference line, only one asymmetry value can be defined, namely the absolute value of the corresponding coordinate.
More specifically, for bilateral points, both the horizontal y-symmetry and the vertical x-symmetry are considered, defined as follows:
$\text{Horizontal } y\text{-symmetry} = \left| c_y - c_{y\_b} \right|$ (3)
$\text{Vertical } x\text{-symmetry} = \left| c_x + c_{x\_b} \right|$ (4)
where $c_y$ is the y-coordinate of a landmark, $c_{y\_b}$ is the y-coordinate of its bilateral landmark, $c_x$ is the x-coordinate of a landmark, and $c_{x\_b}$ is the x-coordinate of its bilateral landmark. For single landmark points located on reference lines, only the vertical x-symmetry value is calculated as follows:
$\text{Single-point vertical } x\text{-symmetry} = \left| c_x \right|$ (5)
Finally, a z-score symmetry value is defined for each point of one face from a set of n faces as the difference between the point’s symmetry value and the mean symmetry of the n faces, divided by their standard deviation:
$\mathrm{AI}_{z\text{-}score} = \frac{1}{n} \sum_{i=1}^{n} \frac{x_i - \bar{x}_l}{\sigma_i}$ (6)
where $x_i$ is the symmetry value for a single point or a pair of bilateral points in one direction estimated for one face, $\bar{x}_l$ is the mean of the same value over all n faces, and $\sigma_i$ is their standard deviation. Overall symmetries, including vertical and horizontal symmetries for pairs of points or single points, can be computed by averaging the z-scores over all points.
The advantage of the z-score asymmetry index is that it is not an absolute metric but encodes the orientation of asymmetries: a near-zero value indicates average symmetry within a group of faces, while negative and positive values indicate higher symmetry and higher asymmetry than the average, respectively. A disadvantage of the z-score asymmetry index is that it is a relative metric, calculated with respect to a group of faces.
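The following sketch computes the per-face symmetry values of Equations (3)–(5) and standardizes them across a reference group as in Equation (6); the coordinate frame (sagittal line as the y-axis) and the landmark arrays are assumed inputs.

```python
import numpy as np

def pair_symmetries(pairs_xy: np.ndarray) -> np.ndarray:
    """Equations (3)-(4) for an (l, 2, 2) array of l bilateral pairs,
    each holding the (x, y) of a landmark and of its bilateral
    counterpart, in a frame whose y-axis is the sagittal line."""
    y_sym = np.abs(pairs_xy[:, 0, 1] - pairs_xy[:, 1, 1])  # horizontal y-symmetry
    x_sym = np.abs(pairs_xy[:, 0, 0] + pairs_xy[:, 1, 0])  # vertical x-symmetry
    return np.concatenate([y_sym, x_sym])

def z_scores(symmetry_per_face: np.ndarray) -> np.ndarray:
    """Equation (6) over an (n_faces, n_values) matrix: standardize each
    symmetry value against the group mean and standard deviation. Rows
    near zero indicate average symmetry; large positive entries indicate
    higher-than-average asymmetry."""
    mu = symmetry_per_face.mean(axis=0)
    sigma = symmetry_per_face.std(axis=0)
    return (symmetry_per_face - mu) / sigma
```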

3.1.3. Distance Symmetry without a Reference Line

In this approach, N pairs of bilateral points are each connected with a line, and the center $m_i$ of each line (Figure 4a) is defined as follows:
$m_i = \frac{x_{Ri} + x_{Li}}{2}$ (7)
where $x_{Ri}$ and $x_{Li}$ are the x-coordinates of each point in the right and left parts of the face, respectively. Then, the horizontal distances between these N centers are calculated. The sum of the absolute values of the distances between the N centers defines a facial asymmetry index, namely the overall horizontal facial asymmetry ($\mathrm{FA}_{hor}$) index, as follows:
$\mathrm{FA}_{hor} = \sum_{\substack{i,j = 1,\dots,N \\ j < i}} \left| m_i - m_j \right|$ (8)
The vertical facial asymmetry index ($\mathrm{FA}_{ver}$) can be calculated in the same way, that is, by measuring and summing the vertical deviations of the landmark points, i.e., the sum of the differences in the vertical locations of the two members of each landmark pair [58].
An index of central facial asymmetry ($\mathrm{FA}_{cen}$ or CFA) is also defined as the sum in Equation (8) restricted to neighboring centers. Thus, several $\mathrm{FA}_{cen}$ values can be derived for a single face, allowing the local investigation of symmetries on a face. Given a center $x \in X$, the neighbors of x are all centers w in X within a certain distance r from x, giving the following:
$\mathrm{FA}_{cen} = \sum_{\{ w \in X \,\mid\, d(x,w) < r \}} \left| m_i - m_j \right|$ (9)
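A compact sketch of the midpoint-based indices, under the reading of Equations (7)–(9) given above: midpoints of bilateral pairs are computed first, FA_hor sums pairwise horizontal midpoint deviations, and FA_cen restricts the sum to midpoints inside a radius-r neighborhood. The inputs are assumed landmark arrays.

```python
import numpy as np
from itertools import combinations

def midpoints(right_pts, left_pts) -> np.ndarray:
    """Equation (7): per-pair midpoints of the lines joining bilateral
    landmarks (read here as the coordinate average of the two points)."""
    return (np.asarray(right_pts, float) + np.asarray(left_pts, float)) / 2.0

def fa_horizontal(mids: np.ndarray) -> float:
    """Equation (8): sum of |m_i - m_j| over all midpoint pairs (x only)."""
    xs = mids[:, 0]
    return float(sum(abs(xs[i] - xs[j])
                     for i, j in combinations(range(len(xs)), 2)))

def fa_central(mids: np.ndarray, center_idx: int, r: float) -> float:
    """Equation (9): the same sum restricted to midpoints within distance
    r of a chosen center, yielding a local asymmetry measure."""
    x = mids[center_idx]
    neighbors = [w for w in range(len(mids))
                 if w != center_idx and np.linalg.norm(mids[w] - x) < r]
    return float(sum(abs(x[0] - mids[w, 0]) for w in neighbors))
```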

3.1.4. Angle Symmetry

In this approach, horizontal lines connecting bilateral landmark points are used, and the angles between them are measured [59]. Perfect symmetry would imply parallel lines. The angle θ between two non-parallel lines with inclinations inc1 and inc2 is defined as follows:
$\tan\theta = \left| \frac{inc_1 - inc_2}{1 + inc_1 \, inc_2} \right|$ (10)
Vertical lines are also used to evaluate mid-face asymmetries, as well as the two basic auxiliary reference lines, i.e., the median sagittal and bipupillary lines [60]. In the case depicted in Figure 4b, the angle formed between the connecting line of the two midpoints, which is nearly vertical, and the vertical reference line is calculated using Equation (10). Popularly used angles extend from the middle of the bipupillary line to the corners of the mouth.
The asymmetry index based on angles (AIang) is calculated as the sum of all g angles, including both horizontal and vertical cases. In perfect symmetry, the latter sum is equal to zero, as determined using the following equation:
$\mathrm{AI}_{ang} = \sum_{i=1}^{g} \theta_i$ (11)
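Equations (10) and (11) translate directly into a few lines of NumPy; the slopes of the bilateral connecting lines are assumed as inputs, and the perpendicular case (where $1 + inc_1 \, inc_2 = 0$) is deliberately left unhandled in this sketch.

```python
import numpy as np

def angle_between(inc1: float, inc2: float) -> float:
    """Equation (10): angle (radians) between lines with slopes inc1 and
    inc2; zero when the lines are parallel (perfect symmetry). The
    perpendicular case, 1 + inc1*inc2 = 0, is not handled here."""
    return float(np.arctan(abs((inc1 - inc2) / (1.0 + inc1 * inc2))))

def ai_angles(slopes, ref_slope: float = 0.0) -> float:
    """Equation (11): sum of the angles between each bilateral connecting
    line and the (ideally parallel) reference line."""
    return float(sum(angle_between(s, ref_slope) for s in slopes))

# Toy example: three nearly horizontal connecting lines vs. a horizontal
# reference; the closer the sum is to zero, the more symmetric the face.
print(ai_angles([0.02, -0.01, 0.05]))
```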

3.2. Asymmetry Indices on 3D Images

Recently, the diagnosis and evaluation of facial deformities have also been performed using 3D imaging. The primary advantage of 3D imaging is the capability to view faces from different angles and accurately measure deformations. For this reason, a 3D facial coordinate system is established [61]. The coordinate system is determined using three planes: the yz-vertical plane defined by the median sagittal line, the xy-horizontal plane that passes through the nasion, and the xz-plane, which also passes through the nasion; thus, the nasion is considered to be the center of the coordinate system, as illustrated in Figure 5a. Facial landmarks are determined to be the same as those in 2D images, with each one having three coordinates (x, y, z). The nasion (n) is, therefore, located at coordinates (0, 0, 0). Then, two different approaches can be adopted to measure asymmetry: one is based on distances, and the other is based on angles.

3.2.1. Distance Asymmetry

In order to measure a facial asymmetry index (FA3D), the distance from each landmark to the three planes is measured with respect to both face sides, i.e., left (L) and right (R), as follows:
$\mathrm{FA}_{3D} = \sqrt{(dx_L - dx_R)^2 + (dy_L - dy_R)^2 + (dz_L - dz_R)^2}$ (12)
where dx, dy, and dz are the distances of a landmark point from planes yz, xz, and xy, respectively.
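A sketch of this 3D distance index: in the nasion-centered frame defined above, the plane distances of a point reduce to the absolute values of its coordinates, so Equation (12) becomes a norm of coordinate-distance differences. The example coordinates are invented.

```python
import numpy as np

def plane_distances(p) -> np.ndarray:
    """Distances of point p = (x, y, z) to the yz-, xz-, and xy-planes;
    in the nasion-centered frame these are simply |x|, |y|, |z|."""
    return np.abs(np.asarray(p, dtype=float))

def fa_3d(p_left, p_right) -> float:
    """Equation (12): zero for a perfectly mirrored bilateral pair."""
    d = plane_distances(p_left) - plane_distances(p_right)
    return float(np.linalg.norm(d))

# Toy bilateral pair around the nasion at (0, 0, 0).
print(fa_3d((-3.1, 2.0, 1.2), (3.0, 2.1, 1.2)))  # small -> nearly symmetric
```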

3.2.2. Angle Asymmetry

Angle asymmetry can be defined using the cosine similarity between the landmark-pair (left and right) vector and the normal vector of the mid-sagittal plane [62]. Considering that $n$ is the normal vector of the mid-sagittal plane and $p_i$ is the vector of the i-th facial landmark pair (right (R) and left (L) sides), defined as follows:
$p_i = \left( p_{Rx,i} - p_{Lx,i},\; p_{Ry,i} - p_{Ly,i},\; p_{Rz,i} - p_{Lz,i} \right)$ (13)
then the angle asymmetry index ($\mathrm{AI}_{ang3D}$) can be defined using the cosine similarity according to the following equation:
$\mathrm{AI}_{ang3D} = \cos\theta = \frac{n \cdot p_i}{\left\| n \right\| \left\| p_i \right\|}$ (14)
Values of $\mathrm{AI}_{ang3D}$ near 1 indicate symmetry, while values close to 0 indicate asymmetry. Figure 5b illustrates the calculation of angle asymmetry.
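A sketch of Equations (13) and (14), assuming the nasion-centered frame of Figure 5a so that the mid-sagittal normal is the x-axis; the absolute value is our addition to keep the index independent of the left/right ordering of the pair.

```python
import numpy as np

def ai_ang_3d(p_right, p_left, n=(1.0, 0.0, 0.0)) -> float:
    """Equations (13)-(14): cosine similarity between the landmark-pair
    vector and the mid-sagittal normal n; ~1 symmetric, ~0 asymmetric.
    The absolute value removes the dependence on left/right ordering."""
    p = np.asarray(p_right, dtype=float) - np.asarray(p_left, dtype=float)
    n = np.asarray(n, dtype=float)
    return float(abs(n @ p) / (np.linalg.norm(n) * np.linalg.norm(p)))

# Mirrored pair -> pair vector nearly parallel to the normal -> index near 1.
print(ai_ang_3d((3.0, 2.0, 1.0), (-3.0, 2.1, 1.0)))
```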

3.3. Asymmetry Indices on Video

Compared to the previous methods, videos can also capture facial movements, clearly revealing the use of each facial muscle. Facial landmark pairs are defined, and the amount of movement of each pair is evaluated for each facial side (left and right) (Figure 6). To evaluate the amount of movement of each landmark pair, several facial expressions, e.g., smile, anger, etc., are captured and compared to the neutral expression. When changing from the neutral expression to another expression, landmark pairs on both sides of the face should move uniformly in cases of perfect symmetry.
An additional advantage of using videos and calculating the amount of movement is the ability to quantify the degree of improvement in palsy patients over time. Let $n_i$ be the neutral expression and $s_i$ the smile expression for the i-th facial landmark. The amount of movement of the i-th landmark pair, i.e., left and right ($m_{L,i}$ and $m_{R,i}$), is calculated as follows:
$m_{L,i} = \sqrt{(s_{Lx,i} - n_{Lx,i})^2 + (s_{Ly,i} - n_{Ly,i})^2 + (s_{Lz,i} - n_{Lz,i})^2}$ (15)
$m_{R,i} = \sqrt{(s_{Rx,i} - n_{Rx,i})^2 + (s_{Ry,i} - n_{Ry,i})^2 + (s_{Rz,i} - n_{Rz,i})^2}$ (16)
The asymmetry movement index ($\mathrm{AI}_{move,i}$) for the i-th landmark pair is then defined as follows:
$\mathrm{AI}_{move,i} = \left| m_{L,i} - m_{R,i} \right|$ (17)
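Equations (15)–(17) in NumPy form: displacements between the neutral and expression frames are computed per side, and their absolute difference gives the per-pair movement asymmetry. The landmark arrays are assumed to come from a tracked video sequence.

```python
import numpy as np

def movement(expr_pts: np.ndarray, neutral_pts: np.ndarray) -> np.ndarray:
    """Equations (15)-(16): per-landmark Euclidean displacement between
    an expression frame (e.g., smile) and the neutral frame."""
    return np.linalg.norm(np.asarray(expr_pts, float)
                          - np.asarray(neutral_pts, float), axis=1)

def ai_move(smile_left, neutral_left, smile_right, neutral_right) -> np.ndarray:
    """Equation (17): per-pair difference of left/right displacements;
    zero when both sides move uniformly."""
    m_l = movement(smile_left, neutral_left)
    m_r = movement(smile_right, neutral_right)
    return np.abs(m_l - m_r)
```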

4. Machine Learning-Based Facial Palsy Detection and Evaluation

Traditional mathematical modeling has been complemented by advances in machine learning. In this section, the machine learning and deep learning approaches reported in the literature for facial palsy detection and evaluation are reviewed to extract comparative information and draw conclusions regarding the most efficient approaches.
Traditional machine learning methods are based on encoding facial palsy with facial asymmetry-related mathematical features. A portable automatic diagnosis system, based on a smartphone application, for classifying subjects as healthy or palsy patients was presented by Kim et al. [63]. Facial landmarks were extracted, and an asymmetry index was computed. Classification was implemented using Linear Discriminant Analysis (LDA) combined with Support Vector Machines (SVMs), resulting in 88.9% classification accuracy. Wang et al. [64] used Active Shape Models (ASMs) to locate facial landmarks, dividing the face into eight regions, and used Local Binary Patterns (LBPs) to extract descriptors for recognizing patterns of facial movements in these regions, reaching a recognition rate of up to 93.33%. In [65], He et al. extracted LBP-based features in the spatial–temporal domain in both facial regions and validated their method on biomedical videos, reporting an overall accuracy of up to 94% for HB grading. In [51], the authors automatically measured the ability of palsy patients to smile using Active Appearance Models (AAMs) for feature extraction and facial expression synthesis, providing an average accuracy of 87%. McGrenary et al. [66] quantified facial asymmetry in videos using an artificial neural network (ANN).
Early research into facial asymmetry analysis was also conducted by Quan et al. [67], who presented a method for automatically detecting and quantifying facial dysfunctions based on 3D face scans. The authors extracted a number of feature points that enabled the segmentation of faces into local regions, enabling specific asymmetry evaluation for regions of interest rather than the entire face. Gaber et al. [68] proposed an evaluation system for seven palsy categories based on an ensemble-learning SVM classifier, reporting an accuracy of 96.8%. The authors showed that their proposed classifier was robust and stable, even for different training and testing samples. Zhuang et al. [69] carried out a performance evaluation of various feature extraction techniques and concluded that 2D static images with Histogram of Oriented Gradients (HOG) features tend to be more accurate. The authors proposed a framework in which landmark and HOG features were extracted, Principal Component Analysis (PCA) was applied separately to the features, and the results were used as inputs to an SVM classifier for classification into three classes, demonstrating performance of up to 92.2% for the entire face. The same research group, as shown in [70], demonstrated a video classification detection tool, namely the Facial Deficit Identification Tool for Videos (F-DIT-V), exploiting HOG features to achieve a 92.9% classification accuracy. Arora et al. [71] tested an SVM and a logistic regressor on generated facial landmark features, achieving a 76.87% average accuracy with the SVM. In [72], laser speckle contrast imaging was employed by Jiang et al. to monitor the facial blood flow of palsy patients. Then, faces were segmented into regions based on blood distribution features, and three HB score classifiers were tested for their classification performance: a neural network (NN), an SVM, and a k-NN, achieving an accuracy of up to 97.14%. A set of four classifiers (multi-layer perceptron (MLP), SVM, k-NN, and multinomial logistic regression (MNLR)) was also comparatively tested in [73]. The authors explored regional information, extracting handcrafted features only in certain face areas of interest. Experimental results reported up to 95.61% correct facial palsy detection and 95.58% correct facial palsy assessment in three categories (healthy, slight palsy, and strong palsy).
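To give a flavor of such hand-crafted pipelines, the sketch below loosely mirrors the HOG + PCA + SVM scheme of [69] in spirit only; the images, labels, and every hyperparameter are invented stand-ins rather than the original experimental setup.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hog_features(gray_images) -> np.ndarray:
    """One HOG descriptor per (aligned, fixed-size) grayscale face crop."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in gray_images])

# Stand-ins for an in-house dataset: 40 aligned 128x128 face crops with
# three labels (e.g., healthy / slight palsy / strong palsy).
rng = np.random.default_rng(0)
X_img = rng.random((40, 128, 128))
y = rng.integers(0, 3, size=40)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(hog_features(X_img[:30]), y[:30])
print(clf.score(hog_features(X_img[30:]), y[30:]))
```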
All the previous methods are based on hand-crafted features. Deep learning methods can automatically learn discriminative features from the data, without the need to compute them in advance. Deep learning models have accomplished state-of-the-art performances in the field of medical imaging [74]. Based on the above, most of the recent works in vision-based facial palsy detection and evaluation employ deep features. Storey and Jiang [75] presented a unified multitask convolutional neural network (CNN) for the simultaneous object proposal, detection, and asymmetry analysis of faces. Sajid et al. [76] introduced a CNN to classify palsy into five scales, resulting in a 92.6% average classification accuracy. Xia et al. [5] suggested a deep neural network (DNN) to detect facial landmarks in palsy. Hsu et al. [33] proposed a deep hierarchical network (DHN) to quantify facial palsy, including a YOLO2 detector for face detection, a fused neural architecture (line segment network, LSN) to detect facial landmarks, and an object detector, similar to Darknet, to locate palsy regions. Preliminary results of the same method were published in [77]. Guo et al. [78] investigated unilateral peripheral facial paralysis classification using GoogLeNet, reaching a classification accuracy of up to 91.25% for predicting the HB degree.
Storey et al. [79] implemented a facial grading system for video sequences based on a 3D CNN model with ResNet as the backbone, reporting a palsy classification accuracy of up to 82%. Barrios Dell’Olio and Sra [80] proposed a CNN for detecting muscle activation and its intensity in the users of their mobile augmented reality mirror therapy system. In [81], Tan et al. introduced a facial palsy assessment method, including a facial landmark detector, a feature extractor based on an EfficientNet backbone, and semi-supervised extreme learning to classify the features, reporting an 85.5% accuracy. Abayomi-Alli et al. [82] trained a SqueezeNet network with augmented images and used the activations of the final convolutional layer as features to train a multiclass error-corrected output code SVM (ECOC-SVM) classifier, reporting an up to 99.34% mean classification accuracy. In [83], computed tomography (CT) images were used to train two geometric deep learning models, namely PointNet++ and PointCNN, for the facial part segmentation of healthy and palsy patients for facial monitoring and rehabilitation. Umirzakova et al. [84] suggested a light deep learning model for analyzing facial symmetry, using a foreground attention block for enhanced local feature extraction and a depth-map estimator to provide more accurate segmentation results. Table 4 summarizes basic information from all the aforementioned studies, including the methodology followed, the dataset, and the performance results. Details regarding the mathematical modeling of machine learning and deep learning classification models can be found in [85,86,87,88,89].
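As a sketch of the transfer-learning pattern shared by several of these deep approaches (not a reproduction of any specific paper above), a torchvision ResNet-18 pretrained on ImageNet can be given a new classification head for a small number of palsy grades; the grade count and the batch are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 6  # e.g., HB Grades I-VI; an assumption for illustration

# Start from ImageNet weights (torchvision >= 0.13 API) and replace the
# classification head -- with augmentation and dropout, the usual remedy
# for small medical datasets.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)

# Freeze the backbone so only the new head is trained at first.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)  # stand-in batch of face crops
labels = torch.randint(0, NUM_GRADES, (4,))
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()
```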

5. Discussion

From the information included in Table 4, useful conclusions can be drawn. The lack of available datasets designated for palsy detection and evaluation is obvious. Most research teams develop their own private sets to test their algorithms. The most widely used public dataset among the referenced works is the YFP dataset; however, it is a limited video dataset. The videos are converted into image sequences; however, mild dysfunctions are not easily visible in a single image and, thus, a sequence of frames needs to be examined to draw conclusions. Moreover, the dataset is labeled, but the facial landmark points are not annotated. From Table 4, it can be observed that deep learning methods lead to better performance results compared to machine learning methods relying on hand-crafted features.
It is obvious that the classification of facial palsy remains an important and challenging research topic in healthcare informatics. For machine learning methods that depend on prior knowledge to extract features, learning representative facial features from the many diverse data that encode the mathematical modeling of facial symmetry is essential. Deep learning methods do not rely on hand-crafted features. However, it should be noted that most of the related works in the literature use hand-crafted features instead of deep learning methods, for the reasons discussed in the following paragraphs. Yet, the selection of the appropriate method is not straightforward; instead, it depends on the problem’s diversity, the number of available data, the desired accuracy, and the execution time of the algorithm, i.e., on the problem under study.
On the one hand, hand-crafted features used in machine learning models are efficient at classifying facial palsy; however, they depend on prior knowledge, and thus non-representative features may deteriorate the classification accuracy. The major advantage of using landmark features is that they can provide exact measurements of deformed faces, by calculating the distances between facial landmark points and comparing the two facial sides, which is not feasible with deep learning models. For this reason, landmark-based geometric feature learning is preferable and extensively used in maxillofacial surgery [90,91]. Moreover, geometric feature extraction, combined with machine learning models, is less computationally complex compared to deep learning approaches. An additional limitation lies in the exhaustive identification and labeling of the level of palsy by domain experts, such as experienced neurologists. Yet, the interpretation of the level of palsy can be subjective and, therefore, may differ among neurologists.
On the other hand, deep learning models are capable of automatically learning discriminative features and, thus, achieve a higher classification accuracy, as seen in Table 4. The computational cost of the process, due to the great number of parameters in deep architectures compared to machine learning methods, remains a barrier. The interpretation of deep learning algorithms, due to their ‘black box’ nature, is an additional challenge. Despite the high reported accuracies of deep learning in medical image applications, its utilization in clinical practice remains limited [5]. The latter issue mainly occurs because deep models require a large set of training data, which is not available in medical cases such as facial palsy, leading to poor generalization and overfitting issues. It should be noted here that among the available public datasets, only one, namely AFLFP, provides annotated facial landmark images of facial palsy patients. Researchers tend to use small, private, in-house clinical data for their experiments, which lack sufficient sampling diversity, especially for large-scale applications, and obstruct the comparative evaluation of different algorithms on the same data.
Transfer learning, dropout, and data augmentation are some common solutions to overcome overfitting [76], as indicated in the Conclusions/Limitations column of Table 4. Currently, a popular solution to enhance the available facial palsy public datasets in terms of size and variability is to generate synthetic images. Several techniques have been proposed in the literature, either based on image analysis, such as combining the left and right sides of the face from two photographs of the same subject performing different facial expressions to generate a new asymmetrical facial expression [51], or on simple image transform approaches [82], as well as on generative adversarial networks (GANs) that automatically augment the training data with diverse face images [76,92]. In the same direction, researchers are also starting to focus on building mathematical models of the dynamics of facial expressions to create realistic synthetic faces for the modeling of static images and animations [93].
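The half-face recombination idea of [51] reduces to a few lines once faces are aligned with the midline at the center column; this sketch assumes pre-aligned, same-size images of the same subject, with alignment solved upstream.

```python
import numpy as np

def recombine_halves(img_expr_a: np.ndarray, img_expr_b: np.ndarray) -> np.ndarray:
    """Stitch the left half of expression A onto the right half of
    expression B (images pre-aligned, midline at the center column),
    yielding a synthetic asymmetric facial expression."""
    assert img_expr_a.shape == img_expr_b.shape
    mid = img_expr_a.shape[1] // 2
    out = img_expr_b.copy()
    out[:, :mid] = img_expr_a[:, :mid]
    return out
```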

6. Conclusions

The early and precise diagnosis, quantification, and rehabilitation of facial palsy are of great interest to neurologists and data scientists, aiming to improve human well-being in everyday life. Recently, facial palsy has attracted computer scientists, who develop automated solutions as support tools for neurologists, providing a faster and more accurate way of performing facial palsy assessment. The traditional visual inspection and manual assessment of palsy are reinforced by advancements in machine learning and deep learning algorithms. These advancements can enable the precise localization of facial landmark points, the consistent calculation of asymmetry indices, and the construction of a universal grading system for facial palsy.
This work, for the first time, covers all relevant aspects involved in facial palsy detection, ranging from mathematical modeling to deep learning approaches. It provides a complete guide to the mathematical modeling of facial palsy in terms of asymmetry indices in three different image modalities. Moreover, it presents popular facial databases and available grading systems that can be used in relevant research, and it reviews traditional machine learning and deep learning approaches for facial palsy evaluation. It can be concluded that deep learning models can be more accurate at classifying palsy; however, they require increased computations and large training datasets. Efficient landmark extraction and selection techniques, extended research into the feasibility and suitability of appropriate models for the problem under study, and the generation of appropriate labeled datasets, both natural and synthetic, remain challenging.

Author Contributions

Conceptualization, G.A.P. and E.V.; methodology, G.A.P. and E.V.; investigation, E.V. and T.K.; resources, E.V. and T.K.; data curation, E.V.; writing—original draft preparation, E.V., G.A.P. and V.P.; writing—review and editing, E.V., G.A.P., T.K. and V.P.; supervision, G.A.P.; project administration, G.A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Veen, M.M.; ten Hoope, B.W.T.; Bruins, T.E.; Stewart, R.E.; Werker, P.M.N.; Dijkstra, P.U. Therapists’ Perceptions and Attitudes in Facial Palsy Rehabilitation Therapy: A Mixed Methods Study. Physiother. Theory Pract. 2022, 38, 2062–2072. [Google Scholar] [CrossRef] [PubMed]
  2. Banita, B.; Tanwar, P. A Tour Toward the Development of Various Techniques for Paralysis Detection Using Image Processing. In Lecture Notes in Computational Vision and Biomechanics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 187–214. [Google Scholar]
  3. Hotton, M.; Huggons, E.; Hamlet, C.; Shore, D.; Johnson, D.; Norris, J.H.; Kilcoyne, S.; Dalton, L. The Psychosocial Impact of Facial Palsy: A Systematic Review. Br. J. Health Psychol. 2020, 25, 695–727. [Google Scholar] [CrossRef] [PubMed]
  4. McKernon, S.; House, A.D.; Balmer, C. Facial Palsy: Aetiology, Diagnosis and Management. Dent. Update 2019, 46, 565–572. [Google Scholar] [CrossRef]
  5. Xia, Y.; Nduka, C.; Yap Kannan, R.; Pescarini, E.; Enrique Berner, J.; Yu, H. AFLFP: A Database With Annotated Facial Landmarks for Facial Palsy. IEEE Trans. Comput. Soc. Syst. 2023, 10, 1975–1985. [Google Scholar] [CrossRef]
  6. Guo, Z.; Dan, G.; Xiang, J.; Wang, J.; Yang, W.; Ding, H.; Deussen, O.; Zhou, Y. An Unobtrusive Computerized Assessment Framework for Unilateral Peripheral Facial Paralysis. IEEE J. Biomed. Health Inform. 2018, 22, 835–841. [Google Scholar] [CrossRef] [PubMed]
  7. Demeco, A.; Marotta, N.; Moggio, L.; Pino, I.; Marinaro, C.; Barletta, M.; Petraroli, A.; Palumbo, A.; Ammendolia, A. Quantitative Analysis of Movements in Facial Nerve Palsy with Surface Electromyography and Kinematic Analysis. J. Electromyogr. Kinesiol. 2021, 56, 102485. [Google Scholar] [CrossRef] [PubMed]
  8. Baude, M.; Hutin, E.; Gracies, J.-M. A Bidimensional System of Facial Movement Analysis Conception and Reliability in Adults. Biomed. Res. Int. 2015, 2015, 1–8. [Google Scholar] [CrossRef] [PubMed]
  9. Petrides, G.; Clark, J.R.; Low, H.; Lovell, N.; Eviston, T.J. Three-Dimensional Scanners for Soft-Tissue Facial Assessment in Clinical Practice. J. Plast. Reconstr. Aesthetic Surg. 2021, 74, 605–614. [Google Scholar] [CrossRef]
  10. Azuma, T.; Fuchigami, T.; Nakamura, K.; Kondo, E.; Sato, G.; Kitamura, Y.; Takeda, N. New Method to Evaluate Sequelae of Static Facial Asymmetry in Patients with Facial Palsy Using Three-Dimensional Scanning Analysis. Auris Nasus Larynx 2022, 49, 755–761. [Google Scholar] [CrossRef]
  11. Amsalam, A.S.; Al-Naji, A.; Yahya Daeef, A.; Chahl, J. Computer Vision System for Facial Palsy Detection. J. Tech. 2023, 5, 44–51. [Google Scholar] [CrossRef]
  12. Lou, J.; Yu, H.; Wang, F.-Y. A Review on Automated Facial Nerve Function Assessment from Visual Face Capture. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 488–497. [Google Scholar] [CrossRef]
  13. Boochoon, K.; Mottaghi, A.; Aziz, A.; Pepper, J.-P. Deep Learning for the Assessment of Facial Nerve Palsy: Opportunities and Challenges. Facial Plast. Surg. 2023, 39, 508–511. [Google Scholar] [CrossRef]
  14. Meintjes, E.M.; Douglas, T.S.; Martinez, F.; Vaughan, C.L.; Adams, L.P.; Stekhoven, A.; Viljoen, D. A Stereo-Photogrammetric Method to Measure the Facial Dysmorphology of Children in the Diagnosis of Fetal Alcohol Syndrome. Med. Eng. Phys. 2002, 24, 683–689. [Google Scholar] [CrossRef]
  15. Wachtman, G.S.; Cohn, J.F.; VanSwearingen, J.M.; Manders, E.K. Automated Tracking of Facial Features in Patients with Facial Neuromuscular Dysfunction. Plast. Reconstr. Surg. 2001, 107, 1124–1133. [Google Scholar] [CrossRef] [PubMed]
  16. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in Health and Medicine. Nat. Med. 2022, 28, 31–38. [Google Scholar] [CrossRef] [PubMed]
  17. Wen, Z.; Huang, H. The Potential for Artificial Intelligence in Healthcare. J. Commer. Biotechnol. 2023, 27, 217–224. [Google Scholar] [CrossRef]
  18. Deng, W.; Fang, Y.; Xu, Z.; Hu, J. Facial Landmark Localization by Enhanced Convolutional Neural Network. Neurocomputing 2018, 273, 222–229. [Google Scholar] [CrossRef]
  19. Tang, X.; Guo, F.; Shen, J.; Du, T. Facial Landmark Detection by Semi-Supervised Deep Learning. Neurocomputing 2018, 297, 22–32. [Google Scholar] [CrossRef]
  20. Chrysos, G.G.; Antonakos, E.; Snape, P.; Asthana, A.; Zafeiriou, S. A Comprehensive Performance Evaluation of Deformable Face Tracking “In-the-Wild”. Int. J. Comput. Vis. 2018, 126, 198–232. [Google Scholar] [CrossRef]
  21. Peng, T.; Li, M.; Chen, F.; Xu, Y.; Zhang, D. Learning Efficient Facial Landmark Model for Human Attractiveness Analysis. Pattern Recognit. 2023, 138, 109370. [Google Scholar] [CrossRef]
  22. Huang, Y.; Huang, H. Stacked Attention Hourglass Network Based Robust Facial Landmark Detection. Neural Netw. 2023, 157, 323–335. [Google Scholar] [CrossRef] [PubMed]
  23. Bakkialakshmi, V.S.; Sudalaimuthu, T.; Umamaheswari, B. Emo-Spots: Detection and Analysis of Emotional Attributes through Bio-Inspired Facial Landmarks. In Lecture Notes in Electrical Engineering; Springer: Berlin/Heidelberg, Germany, 2023; pp. 103–115. ISBN 9789811981357. [Google Scholar]
  24. Berlin, N.F.; Berssenbrügge, P.; Runte, C.; Wermker, K.; Jung, S.; Kleinheinz, J.; Dirksen, D. Quantification of Facial Asymmetry by 2D Analysis—A Comparison of Recent Approaches. J. Cranio-Maxillofac. Surg. 2014, 42, 265–271. [Google Scholar] [CrossRef] [PubMed]
  25. Sagonas, C.; Tzimiropoulos, G.; Zafeiriou, S.; Pantic, M. 300 Faces In-the-Wild Challenge: The First Facial Landmark Localization Challenge. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 397–403. [Google Scholar]
  26. Belhumeur, P.N.; Jacobs, D.W.; Kriegman, D.J.; Kumar, N. Localizing Parts of Faces Using a Consensus of Exemplars. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2930–2940. [Google Scholar] [CrossRef] [PubMed]
  27. Le, V.; Brandt, J.; Lin, Z.; Bourdev, L.; Huang, T.S. Interactive Facial Feature Localization. In Lecture Notes in Computer Science; Including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics; Springer: Berlin/Heidelberg, Germany, 2012; pp. 679–692. ISBN 9783642337116. [Google Scholar]
  28. Zhu, X.; Ramanan, D. Face Detection, Pose Estimation, and Landmark Localization in the Wild. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2879–2886. [Google Scholar]
  29. Kostinger, M.; Wohlhart, P.; Roth, P.M.; Bischof, H. Annotated Facial Landmarks in the Wild: A Large-Scale, Real-World Database for Facial Landmark Localization. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 2144–2151. [Google Scholar]
  30. Messer, K.; Matas, J.; Kittler, J.; Luettin, J.; Maitre, G. XM2VTSDB: The Extended M2VTS Database. Proc. Second Int. Conf. Audio Video-Based Biom. Pers. Authentication 1999, 964, 965–966. [Google Scholar]
  31. Gross, R.; Matthews, I.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010, 28, 807–813. [Google Scholar] [CrossRef] [PubMed]
  32. Matuszewski, B.J.; Quan, W.; Shark, L.-K.; McLoughlin, A.S.; Lightbody, C.E.; Emsley, H.C.A.; Watkins, C.L. Hi4D-ADSIP 3-D Dynamic Facial Articulation Database. Image Vis. Comput. 2012, 30, 713–727. [Google Scholar] [CrossRef]
  33. Hsu, G.-S.J.; Kang, J.-H.; Huang, W.-F. Deep Hierarchical Network With Line Segment Learning for Quantitative Analysis of Facial Palsy. IEEE Access 2019, 7, 4833–4842. [Google Scholar] [CrossRef]
  34. Greene, J.J.; Guarin, D.L.; Tavares, J.; Fortier, E.; Robinson, M.; Dusseldorp, J.; Quatela, O.; Jowett, N.; Hadlock, T. The Spectrum of Facial Palsy: The MEEI Facial Palsy Photo and Video Standard Set. Laryngoscope 2020, 130, 32–37. [Google Scholar] [CrossRef] [PubMed]
  35. Samsudin, W.S.W.; Sundaraj, K. Evaluation and Grading Systems of Facial Paralysis for Facial Rehabilitation. J. Phys. Ther. Sci. 2013, 25, 515–519. [Google Scholar] [CrossRef]
  36. Botman, J.W.M.; Jongkees, L.B.W. The Result of Intratemporal Treatment of Facial Palsy. ORL 1955, 17, 80–100. [Google Scholar] [CrossRef]
  37. Peitersen, E. Bell’s Palsy: The Spontaneous Course of 2500 Peripheral Facial Nerve Palsies of Different Etiologies. Acta Otolaryngol. 2002, 122, 4–30. [Google Scholar] [CrossRef]
  38. Smith, I.M.; Murray, J.A.M.; Cull, R.E.; Slattery, J. A Comparison of Facial Grading Systems. Clin. Otolaryngol. 1992, 17, 303–307. [Google Scholar] [CrossRef] [PubMed]
  39. Adour, K.K.; Swanson, P.J. Facial Paralysis in 403 Consecutive Patients: Emphasis on Treatment Response in Patients with Bell’s Palsy. Trans. Am. Acad. Ophthalmol. Otolaryngol. 1971, 75, 1284–1301. [Google Scholar] [PubMed]
  40. Janssen, F.P. Over de Postoperatieve Facialis Verlamming. Ph.D. Thesis, University of Amsterdam, Amsterdam, The Netherlands, Verlag nicht ermittelbar. 1963. [Google Scholar]
  41. Yanagihara, N. Grading of Facial Palsy. In Proceedings of the Third International Symposium on Facial Nerve Surgery, Zurich, Switzerland, 9–12 August 1976; pp. 533–535. [Google Scholar]
  42. Stennert, E. Facial Nerve Paralysis Scoring System. In Proceedings of the Third International Symposium on Facial Nerve Surgery, Zurich, Switzerland, 9–12 August 1976; pp. 543–547. [Google Scholar]
  43. House, J.W.; Brackmann, D.E. Facial Nerve Grading System. Otolaryngol. Neck Surg. 1985, 93, 146–147. [Google Scholar] [CrossRef]
  44. Burres, S.; Fisch, U. The Comparison of Facial Grading Systems. Arch. Otolaryngol. Head Neck Surg. 1986, 112, 755–758. [Google Scholar] [CrossRef] [PubMed]
  45. Murty, G.E.; Diver, J.P.; Kelly, P.J.; O’Donoghue, G.M.; Bradley, P.J. The Nottingham System: Objective Assessment of Facial Nerve Function in the Clinic. Otolaryngol. Neck Surg. 1994, 110, 156–161. [Google Scholar] [CrossRef]
  46. Berg, T.; Jonsson, L.; Engström, M. Agreement between the Sunnybrook, House-Brackmann, and Yanagihara Facial Nerve Grading Systems in Bell’s Palsy. Otol. Neurotol. 2004, 25, 1020–1026. [Google Scholar] [CrossRef] [PubMed]
  47. Satoh, Y.; Kanzaki, J.; Yoshihara, S. A Comparison and Conversion Table of ‘the House–Brackmann Facial Nerve Grading System’ and ‘the Yanagihara Grading System’. Auris Nasus Larynx 2000, 27, 207–212. [Google Scholar] [CrossRef]
  48. Kecskés, G.; Jóri, J.; O’Reilly, B. Current Diagnostic, Pharmaceutics and Reconstructive Surgical Methods in the Management of Facial Nerve Palsy. Ph.D. Thesis, University of Szeged, Szeged, Hungary, 2012. [Google Scholar]
  49. Johnson, P.; Brown, H.; Kuzon, W. Simultaneous Quantification of Facial Movements: The Maximal Static Response Assays of Facial Nerve Function. Ann. Plast. Surg. 1994, 32, 171–179. [Google Scholar] [CrossRef]
  50. Rogers, C.R.; Schmidt, K.L.; VanSwearingen, J.M.; Cohn, J.F.; Wachtman, G.S.; Manders, E.K.; Deleyiannis, F.W.B. Automated Facial Image Analysis. Ann. Plast. Surg. 2007, 58, 39–47. [Google Scholar] [CrossRef]
  51. Delannoy, J.R.; Ward, T.E. A Preliminary Investigation into the Use of Machine Vision Techniques for Automating Facial Paralysis Rehabilitation Therapy. In Proceedings of the IET Irish Signals and Systems Conference (ISSC 2010), Cork, Ireland, 23–24 June 2010; pp. 228–232. [Google Scholar]
  52. Kecskés, G.; Jóri, J.; O’Reilly, B.F.; Viharos, L.; Rovó, L. Clinical Assessment of a New Computerised Objective Method of Measuring Facial Palsy. Clin. Otolaryngol. 2011, 36, 313–319. [Google Scholar] [CrossRef]
  53. Anguraj, K.; Kandiban, R.; Jayakumar, K.S. Facial Paralysis Diseases Level Detection Using CEM Algorithm for Clinical Applications. Eur. J. Sci. Res. 2012, 77, 543–548. [Google Scholar]
  54. Penke, L.; Bates, T.C.; Gow, A.J.; Pattie, A.; Starr, J.M.; Jones, B.C.; Perrett, D.I.; Deary, I.J. Symmetric Faces Are a Sign of Successful Cognitive Aging. Evol. Hum. Behav. 2009, 30, 429–437. [Google Scholar] [CrossRef]
  55. Nakamura, T.; Okamoto, K.; Maruyama, T. Facial Asymmetry in Patients with Cervicobrachial Pain and Headache. J. Oral Rehabil. 2008, 28, 1009–1014. [Google Scholar] [CrossRef]
  56. Gosla-Reddy, S.; Nagy, K.; Mommaerts, M.Y.; Reddy, R.R.; Bronkhorst, E.M.; Prasad, R.; Kuijpers-Jagtman, A.M.; Bergé, S.J. Primary Septoplasty in the Repair of Unilateral Complete Cleft Lip and Palate. Plast. Reconstr. Surg. 2011, 127, 761–767. [Google Scholar] [CrossRef] [PubMed]
  57. Bashour, M. An Objective System for Measuring Facial Attractiveness. Plast. Reconstr. Surg. 2006, 118, 757–774. [Google Scholar] [CrossRef]
  58. Scheib, J.E.; Gangestad, S.W.; Thornhill, R. Facial Attractiveness, Symmetry and Cues of Good Genes. Proc. R. Soc. London. Ser. B Biol. Sci. 1999, 266, 1913–1917. [Google Scholar] [CrossRef]
  59. Yamashita, Y.; Nakamura, Y.; Shimada, T.; Nomura, Y.; Hirashita, A. Asymmetry of the Lips of Orthognathic Surgery Patients. Am. J. Orthod. Dentofac. Orthop. 2009, 136, 559–563. [Google Scholar] [CrossRef]
  60. Yu, C.-C.; Bergeron, L.; Lin, C.-H.; Chu, Y.-M.; Chen, Y.-R. Single-Splint Technique in Orthognathic Surgery: Intraoperative Checkpoints to Control Facial Symmetry. Plast. Reconstr. Surg. 2009, 124, 879–886. [Google Scholar] [CrossRef] [PubMed]
  61. Huang, C.S.; Liu, X.Q.; Chen, Y.R. Facial Asymmetry Index in Normal Young Adults. Orthod. Craniofac. Res. 2013, 16, 97–104. [Google Scholar] [CrossRef]
  62. Kim, J.; Jeong, H.; Cho, J.; Pak, C.; Oh, T.S.; Hong, J.P.; Kwon, S.; Yoo, J. Numerical Approach to Facial Palsy Using a Novel Registration Method with 3D Facial Landmark. Sensors 2022, 22, 6636. [Google Scholar] [CrossRef]
  63. Kim, H.; Kim, S.; Kim, Y.; Park, K. A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy. Sensors 2015, 15, 26756–26768. [Google Scholar] [CrossRef]
  64. Wang, T.; Dong, J.; Sun, X.; Zhang, S.; Wang, S. Automatic Recognition of Facial Movement for Paralyzed Face. Biomed. Mater. Eng. 2014, 24, 2751–2760. [Google Scholar] [CrossRef]
  65. He, S.; Soraghan, J.J.; O’Reilly, B.F.; Xing, D. Quantitative Analysis of Facial Paralysis Using Local Binary Patterns in Biomedical Videos. IEEE Trans. Biomed. Eng. 2009, 56, 1864–1870. [Google Scholar] [CrossRef]
  66. McGrenary, S.; O’Reilly, B.F.; Soraghan, J.J. Objective Grading of Facial Paralysis Using Artificial Intelligence Analysis of Video Data. In Proceedings of the 18th IEEE Symposium on Computer-Based Medical Systems (CBMS’05), Dublin, Ireland, 23–24 June 2005; pp. 587–592. [Google Scholar]
  67. Quan, W.; Matuszewski, B.J.; Shark, L.-K. Facial Asymmetry Analysis Based on 3-D Dynamic Scans. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 2676–2681. [Google Scholar]
  68. Gaber, A.; Taher, M.F.; Wahed, M.A.; Shalaby, N.M.; Gaber, S. Classification of Facial Paralysis Based on Machine Learning Techniques. Biomed. Eng. Online 2022, 21, 65. [Google Scholar] [CrossRef] [PubMed]
  69. Zhuang, Y.; McDonald, M.; Uribe, O.; Yin, X.; Parikh, D.; Southerland, A.M.; Rohde, G.K. Facial Weakness Analysis and Quantification of Static Images. IEEE J. Biomed. Health Inform. 2020, 24, 2260–2267. [Google Scholar] [CrossRef] [PubMed]
  70. Zhuang, Y.; Uribe, O.; McDonald, M.; Yin, X.; Parikh, D.; Southerland, A.; Rohde, G. F-DIT-V: An Automated Video Classification Tool for Facial Weakness Detection. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar]
  71. Arora, A.; Sinha, A.; Bhansali, K.; Goel, R.; Sharma, I.; Jayal, A. SVM and Logistic Regression for Facial Palsy Detection Utilizing Facial Landmark Features. In Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing, Noida, India, 4–6 August 2022; ACM: New York, NY, USA, 2022; pp. 43–48. [Google Scholar]
  72. Jiang, C.; Wu, J.; Zhong, W.; Wei, M.; Tong, J.; Yu, H.; Wang, L. Automatic Facial Paralysis Assessment via Computational Image Analysis. J. Healthc. Eng. 2020, 2020, 1–10. [Google Scholar] [CrossRef] [PubMed]
  73. Parra-Dominguez, G.S.; Garcia-Capulin, C.H.; Sanchez-Yanez, R.E. Automatic Facial Palsy Diagnosis as a Classification Problem Using Regional Information Extracted from a Photograph. Diagnostics 2022, 12, 1528. [Google Scholar] [CrossRef] [PubMed]
  74. Zhang, Y.; Gorriz, J.M.; Dong, Z. Deep Learning in Medical Image Analysis. J. Imaging 2021, 7, 74. [Google Scholar] [CrossRef]
  75. Storey, G.; Jiang, R. Face Symmetry Analysis Using a Unified Multi-Task CNN for Medical Applications. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 451–463. [Google Scholar]
  76. Sajid, M.; Shafique, T.; Baig, M.; Riaz, I.; Amin, S.; Manzoor, S. Automatic Grading of Palsy Using Asymmetrical Facial Features: A Study Complemented by New Solutions. Symmetry 2018, 10, 242. [Google Scholar] [CrossRef]
  77. Hsu, G.-S.J.; Huang, W.-F.; Kang, J.-H. Hierarchical Network for Facial Palsy Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 693–6936. [Google Scholar]
  78. Guo, Z.; Shen, M.; Duan, L.; Zhou, Y.; Xiang, J.; Ding, H.; Chen, S.; Deussen, O.; Dan, G. Deep Assessment Process: Objective Assessment Process for Unilateral Peripheral Facial Paralysis via Deep Convolutional Neural Network. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 135–138. [Google Scholar]
  79. Storey, G.; Jiang, R.; Keogh, S.; Bouridane, A.; Li, C.-T. 3DPalsyNet: A Facial Palsy Grading and Motion Recognition Framework Using Fully 3D Convolutional Neural Networks. IEEE Access 2019, 7, 121655–121664. [Google Scholar] [CrossRef]
  80. Barrios Dell’Olio, G.; Sra, M. FaraPy: An Augmented Reality Feedback System for Facial Paralysis Using Action Unit Intensity Estimation. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology, Online, 10–14 October 2021; ACM: New York, NY, USA, 2021; pp. 1027–1038. [Google Scholar]
  81. Tan, X.; Yang, J.; Cao, J. Facial Nerve Paralysis Assessment Based on Regularized Correntropy Criterion SSELM vc and Cascade CNN. In Proceedings of the 2021 55th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 31 October–3 November 2021; pp. 1043–1047. [Google Scholar]
  82. Abayomi-Alli, O.O.; Damaševičius, R.; Maskeliūnas, R.; Misra, S. Few-Shot Learning with a Novel Voronoi Tessellation-Based Image Augmentation Method for Facial Palsy Detection. Electronics 2021, 10, 978. [Google Scholar] [CrossRef]
  83. Nguyen, D.-P.; Berg, P.; Debbabi, B.; Nguyen, T.-N.; Tran, V.-D.; Nguyen, H.-Q.; Dakpé, S.; Dao, T.-T. Automatic Part Segmentation of Facial Anatomies Using Geometric Deep Learning toward a Computer-Aided Facial Rehabilitation. Eng. Appl. Artif. Intell. 2023, 119, 105832. [Google Scholar] [CrossRef]
  84. Umirzakova, S.; Ahmad, S.; Mardieva, S.; Muksimova, S.; Whangbo, T.K. Deep Learning-Driven Diagnosis: A Multi-Task Approach for Segmenting Stroke and Bell’s Palsy. Pattern Recognit. 2023, 144, 109866. [Google Scholar] [CrossRef]
  85. Bensoussan, A.; Li, Y.; Nguyen, D.P.C.; Tran, M.-B.; Yam, S.C.P.; Zhou, X. Machine Learning and Control Theory. In Handbook of Numerical Analysis; Elsevier: Amsterdam, The Netherlands, 2022; pp. 531–558. ISBN 9780323850599. [Google Scholar]
  86. Sukumaran, A.; Abraham, A. Automated Detection and Classification of Meningioma Tumor from MR Images Using Sea Lion Optimization and Deep Learning Models. Axioms 2021, 11, 15. [Google Scholar] [CrossRef]
  87. Berner, J.; Grohs, P.; Kutyniok, G.; Petersen, P. The Modern Mathematics of Deep Learning. In Mathematical Aspects of Deep Learning; Cambridge University Press: Cambridge, UK, 2022; pp. 1–111. [Google Scholar]
  88. Dutta, N.; Subramaniam, U.; Padmanaban, S. Mathematical Models of Classification Algorithm of Machine Learning. In Proceedings of the International Meeting on Advanced Technologies in Energy and Electrical Engineering, Tunis, Tunisia, 28–29 November 2019; Hamad bin Khalifa University Press (HBKU Press): Doha, Qatar, 2020. [Google Scholar]
  89. Pedrammehr, S.; Hejazian, M.; Chalak Qazani, M.R.; Parvaz, H.; Pakzad, S.; Ettefagh, M.M.; Suhail, A.H. Machine Learning-Based Modelling and Meta-Heuristic-Based Optimization of Specific Tool Wear and Surface Roughness in the Milling Process. Axioms 2022, 11, 430. [Google Scholar] [CrossRef]
  90. Ma, Q.; Kobayashi, E.; Fan, B.; Nakagawa, K.; Sakuma, I.; Masamune, K.; Suenaga, H. Automatic 3D Landmarking Model Using Patch-based Deep Neural Networks for CT Image of Oral and Maxillofacial Surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, e2093. [Google Scholar] [CrossRef] [PubMed]
  91. Li, J.; Erdt, M.; Janoos, F.; Chang, T.; Egger, J. Medical Image Segmentation in Oral-Maxillofacial Surgery. In Computer-Aided Oral and Maxillofacial Surgery; Elsevier: Amsterdam, The Netherlands, 2021; pp. 1–27. ISBN 9780128232996. [Google Scholar]
  92. Zhang, S.; Wang, T.; Peng, Y.; Dong, J. A Hierarchically Trained Generative Network for Robust Facial Symmetrization. Technol. Health Care 2019, 27, 217–227. [Google Scholar] [CrossRef] [PubMed]
  93. Pourebadi, M.; Riek, L.D. Facial Expression Modeling and Synthesis for Patient Simulator Systems: Past, Present, and Future. ACM Trans. Comput. Healthc. 2022, 3, 1–32. [Google Scholar] [CrossRef]
Figure 1. Typical framework of facial palsy assessment methods.
Figure 2. Reference facial point systems: (a) general overview of facial reference points (27 points); (b) 68-point reference system; (c) 98-point reference system. Noted letters and numbers indicate the facial landmark reference points of each depicted system, emphasizing the outline of the face and key facial features, such as the eyes, eyebrows, mouth, and nose.
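As an illustrative starting point for the 68-point scheme in Figure 2b, the following minimal sketch extracts landmarks with dlib's pretrained shape predictor. The model file name is the one distributed by dlib and must be downloaded separately; this is a generic sketch, not the pipeline of any method reviewed here.

```python
# Illustrative sketch: extracting 68 facial landmarks with dlib.
# Assumes the pretrained model file "shape_predictor_68_face_landmarks.dat"
# (distributed by dlib) has been downloaded separately.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image):
    """Return a (68, 2) array of (x, y) landmark points, or None if no face is found."""
    faces = detector(image, 1)          # upsample once to detect smaller faces
    if not faces:
        return None
    shape = predictor(image, faces[0])  # fit the 68-point model to the first face
    return np.array([(p.x, p.y) for p in shape.parts()])
```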
Figure 4. Measuring asymmetries in 2D images: (a) with distances and without reference lines; (b) using angles.
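A minimal sketch of the distance- and angle-based measurements of Figure 4 follows; the same computations extend to the 3D case of Figure 5 by substituting 3D coordinates. The landmark indices, symmetric pairs, and midline definition are illustrative assumptions over the 68-point scheme, not values prescribed by any cited grading system.

```python
# Minimal sketch of distance- and angle-based asymmetry indices in 2D.
# Landmark indices follow the 68-point scheme; the chosen pairs and the
# midline definition are illustrative assumptions.
import numpy as np

# Symmetric landmark pairs (left index, right index), e.g. outer eye
# corners (36, 45), mouth corners (48, 54), and outer brow ends (17, 26).
SYMMETRIC_PAIRS = [(36, 45), (48, 54), (17, 26)]

def distance_asymmetry(pts):
    """Mean difference of left/right distances to a vertical facial midline."""
    midline_x = (pts[27, 0] + pts[8, 0]) / 2.0   # nose bridge and chin define the midline
    diffs = [abs(abs(pts[l, 0] - midline_x) - abs(pts[r, 0] - midline_x))
             for l, r in SYMMETRIC_PAIRS]
    return float(np.mean(diffs))

def angle_asymmetry(pts):
    """Angle (degrees) between the inter-ocular line and the mouth-corner line;
    close to 0 for a symmetric, upright face."""
    eye_vec = pts[45] - pts[36]
    mouth_vec = pts[54] - pts[48]
    ang = np.arctan2(eye_vec[1], eye_vec[0]) - np.arctan2(mouth_vec[1], mouth_vec[0])
    return float(np.degrees(abs(ang)))
```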
Figure 5. Measuring asymmetries in 3D images using the following metrics: (a) distances; (b) angles.
Figure 6. Determination of facial landmark pair movement while being neutral and smiling.
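The movement measurement of Figure 6 can be sketched as the displacement of paired landmarks between a neutral and a smiling frame, summarized as a left-to-right ratio. The ratio convention (smaller over larger displacement, so 1.0 means perfectly symmetric movement) is an assumption for illustration.

```python
# Sketch of the landmark-movement measurement illustrated in Figure 6:
# displacement of each mouth-corner landmark between a neutral and a
# smiling frame, summarized as a symmetry ratio in (0, 1]. The ratio
# convention (smaller/larger) is an illustrative assumption.
import numpy as np

def movement_ratio(neutral_pts, smile_pts, left_idx=48, right_idx=54):
    """Return a symmetry ratio; values near 1 indicate both mouth corners
    moved a similar amount between the two expressions."""
    left_move = np.linalg.norm(smile_pts[left_idx] - neutral_pts[left_idx])
    right_move = np.linalg.norm(smile_pts[right_idx] - neutral_pts[right_idx])
    larger = max(left_move, right_move)
    return 1.0 if larger == 0 else min(left_move, right_move) / larger
```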
Table 1. Basic details of popular facial datasets.

| Dataset [Ref.] | Information | Frontal Facial Landmark Points |
|---|---|---|
| Labeled Face Parts in-the-Wild (LFPW) [26] | A total of 1287 annotated images from Flickr, Google, and Yahoo. | 35 |
| HELEN [27] | A total of 2330 annotated images from Flickr. | 194 |
| Annotated Faces in-the-Wild (AFW) [28] | A total of 205 annotated images, including 468 faces. | 6 |
| Annotated Facial Landmarks in-the-Wild (AFLW) [29] | A total of 25,000 annotated images from Flickr. | 21 |
| 300 Faces in-the-Wild (300-W) [25] | A total of 600 annotated images captured in unconstrained settings. | 68 |
| Extended Multi Modal Verification for Teleservices and Security applications (XM2VTS) [30] | A total of 30 h of annotated video recordings of 295 subjects, including frontal and profile views. | 22 |
| CMU Multi-PIE [31] | A total of 750,000 images of 337 people from different viewpoints, expressions, and illuminations; only a small subset is annotated. | 68 |
| Annotated Facial Landmarks for Facial Palsy (AFLFP) [5] | Annotated video images of 16 facial expressions of 88 subjects (palsy patients and healthy individuals). | 68 |
| Hi4D-ADSIP 3D dynamic facial articulation database [32] | Three-dimensional dynamic facial articulation dataset of scans with high temporal and spatial resolutions, containing 3360 facial sequences captured from 80 healthy volunteers. | 84 |
| YouTube Facial Palsy (YFP) [33] | A total of 32 videos of 22 patients acquired from YouTube and labeled by clinical experts. | - |
| Massachusetts Eye and Ear Infirmary (MEEI) [34] | Images and videos of patients with flaccid and non-flaccid facial palsy. | - |
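Several of the datasets in Table 1 (LFPW, HELEN, AFW, and 300-W, as redistributed by the 300-W challenge) ship their landmark annotations as plain-text .pts files. The following minimal parser is a sketch assuming the common 300-W file layout (a version/n_points header followed by one "x y" pair per line between braces); validate it against the actual files before use.

```python
# Minimal parser for the plain-text .pts landmark files used by several
# datasets in Table 1. The layout assumed here is the common 300-W
# convention: "version:"/"n_points:" header lines, then coordinate pairs
# between "{" and "}". Treat the exact layout as an assumption.
import numpy as np

def read_pts(path):
    """Return an (n_points, 2) float array of landmark coordinates."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    n = int(lines[1].split(":")[1])               # e.g. "n_points: 68"
    start = lines.index("{") + 1
    coords = [tuple(map(float, ln.split())) for ln in lines[start:start + n]]
    return np.array(coords)
```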
Table 2. Facial paralysis grading systems.

| Grading System, Year [Ref.] | Information | Traditional (T)/Computer Vision-Based (CV) |
|---|---|---|
| Botman and Jongkees, 1955 [36] | Five-category gross scale from 0 (normal) to IV (total paralysis). | T |
| Peitersen, 2002 [37] | Five-degree gross grading system from I (no palsy) to VI (complete palsy). | T |
| Smith, 1980 [38] | Unweighted regional system of four categories (entire face, forehead, eye function, and mouth function), leading to scores from 0 (no function) to 4 (normal). | T |
| Adour and Swanson, 1971 [39] | Weighted system that measures the percentage of function in each division of the facial nerve (frontal, eye, mouth). | T |
| Janssen, 1963 [40] | Weighted scale expressed as a percentage, considering four categories (repose, forehead, eye closure, oral branch). | T |
| Yanagihara, 1977 [41] | Unweighted system of 10 graded areas of facial function. | T |
| Stennert, 1977 [42] | Regional negative scale for motor function and secondary defects, ranging from 0 (normal) to −10 (total palsy). | T |
| House and Brackmann (HB), 1983 [43] | Six-category scale from I (normal) to VI (no movement). | T |
| Burres and Fisch, 1986 [44] | Distance measurement of facial landmarks at rest and during five expressions, comparing the affected side with the normal side. | T |
| Nottingham, 1994 [45] | Measures the movement of four landmarks in three facial expressions. | T |
| Yanagihara, 2004 [46] | Unweighted scores of 10 facial expressions, from 0 (complete paralysis) to 40 (normal). | T |
| Sunnybrook (SFGS), 2000 [47] | Measures three components (resting symmetry, voluntary movement, and synkinesis), scoring up to 100 (normal). | T |
| Stennert–Limberg (SLFS), 2012 [48] | Sum of scores of the normal side at rest in four face regions, combined with motility scores during movement. | T |
| Maximal Static Response Assay (MSRA), 1994 [49] | Measurement of displacement from a standard model during facial expressions. | CV |
| Automated Face Image Analysis (AFIA), 2007 [50] | Tracking the movements of the lips. | CV |
| Delannoy and Ward, 2010 [51] | Implements an HB scoring system to measure smile symmetry. | CV |
| Glasgow Facial Palsy Scale, 2012 [52] | HB system combined with regional grades. | CV |
| CEM Algorithm, 2012 [53] | Mouth parameter analysis using distances from the center of the nose to the edges of the mouth. | CV |
Table 3. HB facial paralysis grading scale.

| HB Scale | Information |
|---|---|
| I—normal | Normal facial function. |
| II—mild dysfunction | Overall: slight weakness noticeable on close inspection. At rest: normal symmetry. In motion: moderate-to-good forehead function, complete eye closure with no effort, and slight mouth asymmetry. |
| III—moderate dysfunction | Overall: obvious but not disfiguring difference between the two sides. At rest: normal symmetry. In motion: slight-to-moderate forehead movement, complete eye closure with effort, and weak mouth movement with maximum effort. |
| IV—moderately severe dysfunction | Overall: obvious weakness and asymmetry. At rest: normal symmetry and tone. In motion: no forehead movement, incomplete eye closure, and mouth asymmetry with maximum effort. |
| V—severe dysfunction | Overall: barely noticeable motion. At rest: asymmetry. In motion: no forehead movement, incomplete eye closure, and slight mouth movement. |
| VI—total paralysis | No movement. |
Table 4. Methodologies for facial palsy (FP) detection.

| Ref. | Objective | Methodology | Dataset | Performance | Conclusions/Limitations |
|---|---|---|---|---|---|
| [63] | Smartphone-based FP diagnostic system (five FP grades) | Linear regression model for facial landmark detection and SVM with linear kernel for classification | Private dataset of 36 subjects (23 normal, 13 palsy patients) performing 3 motions | 88.9% classification accuracy | Reproducibility under different experimental conditions and repeatability of measurements over time were not evaluated |
| [64] | Facial movement pattern recognition for FP (2 classes: normal and asymmetric) | Active Shape Models plus Local Binary Patterns (ASMLBP) for feature extraction and SVM for classification | Private dataset of 570 images of 57 subjects with 5 facial movements | Up to 93.33% recognition rate | High robustness and accuracy |
| [65] | Quantitative evaluation of FP (HB scale) | Multiresolution extension of uniform LBP and SVM for FP evaluation | Private dataset of 197 subject videos with 5 facial movements | ~94% classification accuracy | Sensitive to out-of-plane facial movements and to significant natural bilateral asymmetry |
| [51] | Facial landmark tracking and feedback for FP assessment (HB scale) | Active Appearance Models (AAMs) for facial expression synthesis | Private dataset of frontal images of neutral and smile expressions from 5 healthy subjects | 87% accuracy | Preliminary results demonstrating a proof of concept |
| [66] | FP assessment | ANN | Private dataset of 43 videos from 14 subjects | 1.6% average MSE | Pilot study; general results agree with expert opinions |
| [67] | Facial asymmetry measurement | Measurement of a 3D asymmetry index | 3D dynamic scans from the Hi4D-ADSIP database (stroke) | - | Extraction of 3D feature points; potential for detecting facial dysfunctions |
| [68] | FP classification of real-time facial animation units (seven FP grades) | Ensemble learning SVM classifier | Private dataset of 375 records from 13 patients and 1650 records from 50 control subjects | 96.8% accuracy, 88.9% sensitivity, 99% specificity | Data augmentation addressed the imbalanced dataset issue |
| [69] | FP quantification | Combination of landmark and intensity (HOG-based) features with a CNN model for classification | Private dataset of 125 images of left facial weakness, 126 images of right facial weakness, and 186 images of normal subjects | Up to 94.5% accuracy | The combination of landmark and HOG intensity features performed best compared with either feature type alone |
| [70] | FP classification (three classes) | HOG features and a voting classifier | Private dataset of 37 videos of left weakness, 38 of right weakness, and 60 of normal subjects | 92.9% accuracy, 93.6% precision, 92.8% recall, 94.2% specificity | Comparison with other methods confirmed the reliability of HOG features |
| [71] | Facial metric calculation of face-side symmetry | Facial landmark features with cascade regression and SVM | Dataset of 1024 stroke face images and 1081 healthy face images | 76.87% accuracy | Problem-specific machine learning models can improve performance |
| [72] | FP assessment (HB scale) | Laser speckle contrast imaging and NN classifiers | Private dataset of 80 FP patients | 97.14% accuracy | Outperforms state-of-the-art systems and other classifiers |
| [73] | FP classification (three classes) | Regional handcrafted features and four classifiers (MLP, SVM, k-NN, MNLR) | YouTube Facial Palsy (YFP) database | Up to 95.58% correct classification | Severity is classified more reliably in the eye and mouth regions |
| [75] | Face symmetry analysis (symmetrical/asymmetrical) | Unified multi-task CNN | AFLW database to fine-tune the model and extended Cohn–Kanade (CK+) to learn face symmetry (18,786 images in total) | - | Lack of a fully annotated training set; need for labeling or a synthesized training set |
| [76] | FP classification (five grades) | CNN (VGG-16) | Dataset from online sources augmented to 2000 images | 92.6% accuracy, 92.91% precision, 93.14% sensitivity, 93% F1 score | Deep features combined with data augmentation can lead to robust classification |
| [5] | FP classification | FCN | AFLFP dataset | Normalized mean error (NME) of 11.5%; 2.3% mean standard deviation | Comparative results indicate that deep learning methods are, overall, better than machine learning methods |
| [33] | Quantitative analysis of FP | Deep hierarchical network | YouTube Facial Palsy (YFP) database | 5.83% NME | Line segment learning yields an important part of the deep features, improving the accuracy of facial landmark and palsy region detection |
| [77] | Quantitative analysis of FP | Hierarchical detection network | YouTube Facial Palsy (YFP) database | Up to 93% precision and 88% recall | Efficient for video-to-description diagnosis |
| [78] | Unilateral peripheral FP assessment (HB scale) | Deep CNN | Private dataset of 720 labeled images of four facial expressions | 91.25% classification accuracy | Fine-tuned deep CNNs can learn specific representations from biomedical images |
| [79] | FP grading | Fully 3D CNN | Private FP dataset of 696 sequences from 17 subjects | 82% classification accuracy | Very competent at learning spatio-temporal features |
| [80] | AR system for FP estimation | Light-Weight Facial Activation Unit model (LW-FAU) | Private dataset from 20 subjects | - | Lack of FP benchmark models and datasets |
| [81] | FP assessment (six classes) | FNPARCELM-CCNN method | YouTube Facial Palsy (YFP) database | 85.5% accuracy | Semi-supervised methods can distinguish different degrees of FP, even with little labeled data |
| [82] | FP detection and classification | Deep feature extraction with SqueezeNet and an ECOC-SVM classifier | YouTube Facial Palsy (YFP) database | 99.34% accuracy | Improved FP detection from a small dataset |
| [83] | Part segmentation | PointNet++ and PointCNN | CT images of 33 subjects | 99.19% accuracy, 89.09% IoU | Geometric deep learning can be efficient |
| [84] | FP asymmetry analysis | Proposed deep architecture | YouTube Facial Palsy (YFP) database | 93.8% IoU | Performs poorly on bearded faces due to a lack of such images in the training data |
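Many of the machine learning entries in Table 4 (e.g., [63,64,71]) share one skeleton: handcrafted geometric features computed from facial landmarks and fed to an SVM. The sketch below reproduces that generic pattern with scikit-learn; the feature definitions, kernel, hyperparameters, and the landmark_arrays/labels inputs are illustrative assumptions, not the implementation of any specific cited system.

```python
# Sketch of the recurring pipeline in Table 4: handcrafted landmark-based
# asymmetry features classified with an SVM (cf. [63,64,71]). Feature
# choice, kernel, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def asymmetry_features(pts):
    """Two toy descriptors from a (68, 2) landmark array: vertical
    mouth-corner offset and the eye-line/mouth-line angle."""
    mouth_dy = abs(pts[48, 1] - pts[54, 1])
    eye_vec, mouth_vec = pts[45] - pts[36], pts[54] - pts[48]
    ang = np.degrees(abs(np.arctan2(eye_vec[1], eye_vec[0])
                         - np.arctan2(mouth_vec[1], mouth_vec[0])))
    return np.array([mouth_dy, ang])

# landmark_arrays (list of (68, 2) arrays) and labels (0 = healthy,
# 1 = palsy) are placeholders standing in for a real annotated dataset.
X = np.array([asymmetry_features(p) for p in landmark_arrays])
y = np.array(labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # mean 5-fold accuracy
```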