Review

Performance of Artificial Intelligence Models Designed for Diagnosis, Treatment Planning and Predicting Prognosis of Orthognathic Surgery (OGS)—A Scoping Review

by Sanjeev B. Khanagar 1,2,*, Khalid Alfouzan 2,3, Mohammed Awawdeh 1,2, Lubna Alkadi 2,3, Farraj Albalawi 1,2 and Maryam A. Alghilan 2,3
1 Preventive Dental Science Department, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
2 King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
3 Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 1 April 2022 / Revised: 28 May 2022 / Accepted: 29 May 2022 / Published: 31 May 2022
(This article belongs to the Special Issue Applied and Innovative Computational Intelligence Systems)

Abstract
The technological advancements in the field of medical science have led to an escalation in the development of artificial intelligence (AI) applications, which are being extensively used in health sciences. This scoping review aims to outline the application and performance of artificial intelligence models used for diagnosing, treatment planning and predicting the prognosis of orthognathic surgery (OGS). Data for this review were retrieved from electronic databases (PubMed, Google Scholar, Scopus, Web of Science, Embase and Cochrane) by searching for articles related to the research topic published between January 2000 and February 2022. Eighteen articles that met the eligibility criteria were critically analyzed based on QUADAS-2 guidelines, and the certainty of evidence of the included studies was assessed using the GRADE approach. AI has been applied for predicting post-operative facial profiles and facial symmetry, deciding on the need for OGS, predicting perioperative blood loss, planning OGS, segmenting maxillofacial structures for OGS, and the differential diagnosis of OGS. AI models have proven to be efficient and have outperformed conventional methods. These models are reported to be reliable and reproducible; hence, they can be very useful for less experienced practitioners in clinical decision making and in achieving better clinical outcomes.

1. Introduction

Facial appearance and attractiveness influence how an individual is evaluated and perceived. The social well-being of an individual is negatively affected by the presence of facial deformities and dentofacial irregularities, which can result in the devaluation of the individual’s own facial esthetics. Hence, orofacial appearance is considered a critical dimension contributing to an individual’s oral health-related quality of life [1].
Facial symmetry and facial profile, among a variety of other factors, have been found to impact the facial appearance of an individual [2,3]. An unfavorable profile or asymmetry can reflect underlying skeletal deformities that require correction to attain an ideal dental occlusion and ultimately enhance esthetic appearance. This often cannot be achieved solely through orthodontic treatment and may require orthognathic surgery (OGS) [4].
OGS is mainly performed to rectify underlying deformities of the jaws, with the ultimate therapeutic goal of improving function and enhancing facial esthetics. It is a complex, irreversible procedure that has a permanent effect on the patient. Therefore, making the correct decisions related to case diagnosis, surgical indications and the need for pre-operative extractions, and accurately predicting the potential facial morphology, symmetry, and attractiveness, is crucial. Traditionally, planning for OGS has depended primarily on the clinical expertise and experience of the orthodontist and the oral and maxillofacial surgeon involved. In that approach, clinical decision making is achieved through clinical assessment and the use of several diagnostic aids, such as cephalometric analysis and study models, which sometimes may not suffice [5]. The direction and method of OGS are also determined through the identification of a surgical treatment objective and a visual treatment objective.
For a successful OGS, precise preoperative planning is of utmost importance. Conventional two-dimensional (2D) surgical planning of OGS comprises radiographs and manual model surgery, which has certain limitations in cases with severe facial asymmetry. When these 2D plans are executed, there is a risk of bony collision in the ramus area, discrepancies in pitch, roll and yaw rotation, and differences in the midline [6,7]. Advancements in three-dimensional (3D) imaging have led to the development of computer-aided surgical simulation using Cone Beam Computed Tomography (CBCT) images. Computer-aided surgical simulation has been adopted for planning OGS and is used to facilitate cephalometric analysis, splint fabrication and surgery simulation. 3D imaging has significantly enhanced the visualization of the skeletal complexities of dentofacial deformities in terms of yaw rotations, occlusal plane canting and differential length of the body/ramus of the mandible [6,8]. Furthermore, 3D printing is another technical advancement that is clinically applied for creating 3D models from digital images for planning OGS, including occlusal splints, osteotomy guides, repositioning guides, spacers, models, and fixing plates [9]. Virtual surgical planning for OGS provides the surgeon with a clear 3D visualization of the anatomical structures for developing the surgical plan and has resulted in significant improvements in treatment outcomes [9,10].
The technological advancements in the field of medical science have led to an escalation in the development of artificial intelligence (AI) applications, which are being extensively used in health sciences. These AI applications are intended to assist health professionals in providing quality health care and achieving higher accuracy in clinical decision making [11,12,13,14]. AI-based applications used in medical sciences are designed using algorithms that learn from data during training and later make predictions on unseen data during testing [15,16]. Machine Learning (ML) is a subfield of AI that has been widely applied for computer-aided diagnostic support; ML algorithms enable machines to learn from data and subsequently make predictions and resolve problems without human intervention [17]. The most recent advancement in AI is deep learning, a branch of AI inspired by the neural networks of the human brain, which synthesizes useful and applicable knowledge from large amounts of data processed through artificial neural networks [18].
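As a minimal illustration of the train-then-predict workflow described above (a sketch, not taken from any of the reviewed studies), the following Python snippet trains a classifier on labelled training data and evaluates it on unseen test data; the feature matrix and labels are synthetic placeholders standing in for clinical variables.

```python
# Minimal sketch of the "learn from training data, predict on unseen data" workflow.
# The data here are synthetic placeholders, not clinical data from the reviewed studies.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))           # 200 hypothetical cases, 10 cephalometric-style features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical binary label (e.g., surgery vs. no surgery)

# Hold out unseen data for testing, as the reviewed AI pipelines do.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                            # "training" phase
print(accuracy_score(y_test, model.predict(X_test)))   # "testing" phase on unseen data
```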
Deep learning has been gaining considerable attention and popularity within the field of Dentistry, where it has been successfully utilized and has demonstrated a high level of accuracy and precision in the detection, diagnosis, assessment of the need for treatment, and prediction of disease prognosis in the oral and maxillofacial region [15,16,19,20,21,22].
Hence, this scoping review aims to outline the performance of artificial intelligence models used for diagnosing, treatment planning and predicting the prognosis of OGS.

2. Materials and Methods

2.1. Search Strategy

This scoping review conforms to the guidelines set for Preferred Reporting Items for Systematic Reviews and Meta-Analysis for Diagnostic Test Accuracy (PRISMA-DTA) [23]. A literature search was carried out in electronic databases (PubMed, Google Scholar, Scopus, Web of Science, Embase, Cochrane, Saudi Digital Library) to identify articles related to the research topic published between 1 January 2000 and 28 February 2022. Medical Subject Headings (MeSH) such as artificial intelligence, deep learning, machine learning, automated learning, orthognathic surgeries, maxillofacial surgeries, plastic surgeries, prediction, diagnosis, and prognosis were used to search for articles in the electronic search engines. Combinations of these MeSH terms were built using Boolean operators (AND/OR) and used for the advanced search, with English as a language filter (see supplementary material Table S1).
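For illustration only, the short sketch below shows how such a Boolean search string can be assembled from MeSH-style term groups; the exact term combinations used by the authors are those given in supplementary Table S1, so the terms and grouping here are assumptions.

```python
# Illustrative assembly of a Boolean search string from MeSH-style term groups.
# The actual combinations used in the review are listed in supplementary Table S1.
ai_terms = ["artificial intelligence", "deep learning", "machine learning", "automated learning"]
surgery_terms = ["orthognathic surgery", "maxillofacial surgery", "plastic surgery"]
outcome_terms = ["prediction", "diagnosis", "prognosis"]

def or_block(terms):
    """Join a list of terms with OR and wrap the block in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(block) for block in (ai_terms, surgery_terms, outcome_terms))
print(query)
# ("artificial intelligence" OR "deep learning" OR ...) AND ("orthognathic surgery" OR ...) AND ("prediction" OR ...)
```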
In addition to the electronic search, a manual search was also performed in which the reference lists of the initially selected articles were screened and searched in the college library. The articles were searched based on the PICO elements (problem/patient/population, intervention/indicator, comparison, and outcome) (Table 1).

2.2. Study Selection

The process of article selection was executed in two phases. In the first phase, selection was based on the relevance of the article title and abstract to the search objective. The article search in this phase was carried out independently by two authors (S.B.K. and F.A.) and generated 328 articles. After screening for duplication, 126 articles were excluded. The remaining 202 articles were evaluated using the eligibility criteria set for inclusion.

2.3. Eligibility Criteria

The articles included were: (a) original research articles based on AI applications; (b) articles that clearly indicated the type of data sets used for training/validating and for testing the AI model; (c) articles that clearly utilized quantifiable outcome measures of performance. There was no restriction on the type of study design for inclusion.
The articles excluded were: (a) articles with only abstracts; (b) unpublished data such as conference papers and theses uploaded online; (c) review articles, letters to the editor, and commentaries; (d) articles in non-English languages.

2.4. Data Extraction

The articles were checked for eligibility based on the set criteria; following this, the number of articles included for further analysis decreased to 19. In the second phase, the journal names and author details were concealed, and the articles were then distributed between two authors (M.A. and L.A.) who were not involved in the initial search. To check the degree of consistency between these two authors, inter-rater reliability was assessed on a sample of articles before the allotment of the finalized articles; Cohen’s kappa showed 88% agreement between the two authors. In this phase, the authors critically assessed the articles based on the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) guidelines [24]. This tool is used to assess the quality of studies reporting on diagnostic tools, based on four domains (patient selection, index test, reference standard, and flow and timing), which are evaluated in terms of risk of bias and applicability concerns. Following this, the authors disagreed regarding the inclusion of one article, which had not clearly mentioned quantifiable outcome measures of performance. An opinion was obtained from M.G and the article was then excluded. Finally, 18 articles were subjected to qualitative synthesis (Figure 1).
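Inter-rater agreement of the kind reported above is commonly quantified with Cohen’s kappa; the sketch below, using hypothetical include/exclude decisions for two reviewers (not the actual screening data), shows one standard way to compute it.

```python
# Hypothetical include/exclude decisions by two reviewers on a calibration sample of articles.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = include, 0 = exclude (illustrative values)
reviewer_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
percent_agreement = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / len(reviewer_1)
print(f"Cohen's kappa: {kappa:.2f}, raw agreement: {percent_agreement:.0%}")
```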

3. Results

Eighteen articles were finalized and assessed for quantitative data. Most of the research on the application of AI to OGS was conducted within the last two years, and the trend shows a gradual increase in this area of research.

3.1. Qualitative Synthesis of the Included Studies

AI has been applied for predicting the post-operative facial profiles and facial symmetry (n = 8) [25,26,27,28,29,30,31,32], deciding on the need for OGS (n = 5) [33,34,35,36,37], predicting blood loss prior to OGS (n = 1) [38], planning OGS (n = 2) [39,40], segmentation of maxillofacial structures for OGS (n = 1) [41], and differential diagnosis of OGS (n = 1) [42]. The data from selected articles were retrieved and entered into the data sheet (Table 2).
With these data, performing a meta-analysis was not possible because of heterogeneity among the studies in terms of the software and data sets used for assessing the performance of the AI models. Therefore, descriptive data are presented according to the applications for which the AI models were designed.

3.2. Study Characteristics

The articles included in this scoping review underwent qualitative synthesis of their study characteristics (details of authors, publication year, study design, type of algorithm architecture, study objective, number of patients/images/photographs for validating and testing, study factor, study modality, comparisons, evaluation accuracy/average accuracy/statistical significance, outcomes and conclusions), summarized in Table 2.

3.3. Outcome Measures

The outcome was measured in terms of task performance efficiency. The outcome measures were reported in terms of Accuracy, Specificity, Sensitivity, Area Under the Curve (AUC), Intraclass Correlation Coefficient (ICC), Receiver Operating Characteristic Curve (ROC), Statistical Significance and F1 Scores.
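The outcome measures listed above are standard classification metrics; the brief sketch below, with hypothetical predictions (not data from the included studies), shows how accuracy, sensitivity, specificity, F1 score and AUC are typically computed.

```python
# Hypothetical binary predictions (1 = OGS needed) used only to illustrate the reported metrics.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, confusion_matrix

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.35, 0.85, 0.6]  # model probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # also called recall
specificity = tn / (tn + fp)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Sensitivity:", sensitivity, "Specificity:", specificity)
print("F1 score:", f1_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_score))   # area under the ROC curve
```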

3.4. Risk of Bias Assessment and Applicability Concerns

The quality assessment of the 18 articles included in this study was performed using the QUADAS-2 guidelines [24]. The original tool was produced in 2003 through a collaboration between the Centre for Reviews and Dissemination, University of York, and the Academic Medical Centre at the University of Amsterdam, and modified versions have been adopted by the Cochrane Collaboration, NICE and AHRQ. The current version is widely used in systematic reviews to evaluate the risk of bias and applicability of primary diagnostic accuracy studies. QUADAS-2 consists of four key domains: patient selection, index test, reference standard, and flow and timing. The current assessment of risk and applicability based on QUADAS-2 shows that the majority of studies have a low risk of bias, while a very small number show a high risk (see supplementary material Table S2 and Figure 2).

3.5. Assessment of Strength of Evidence

The certainty of the evidence of the studies included in this scoping review was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. The certainty of evidence is rated based on five domains (risk of bias, inconsistency, indirectness, imprecision, and publication bias) and is ultimately categorized as very low, low, moderate, or high [43] (Table 3).

4. Discussion

Facial appearance has a significant effect on an individual’s personality and social life. Morphological deformities of the craniofacial region have a negative impact on the social and mental well-being of an individual in the community. From the patient’s perspective, improving the morphology and enhancing the facial esthetics is the ultimate therapeutic goal. From the orthodontist’s point of view, however, planning the appropriate treatment for correcting deformities, improving oral function, and ultimately improving the esthetic appearance is a complex task.
To accomplish that, accurate diagnosis and comprehensive treatment planning through sound clinical decision making are critical. This involves predicting the changes that may occur after orthodontic treatment and deciding on the need for tooth extraction with or without OGS. The entire process is based on the clinical experience of the clinician and the availability of diagnostic tools that aid and supplement decision making, planning and execution.
In the current era of enormous advancement in science and technology, the cutting-edge application of AI in various fields of medical sciences has made a major impact. In dentistry, AI has been widely applied in the diagnosis of oral diseases and the detection of pathologies of the orofacial region. AI has demonstrated excellent performance in the diagnosis of dental caries [12,44,45,46], in diagnosing pathologies of the orofacial structures such as maxillary sinusitis [47] and Sjogren’s syndrome [48], and in predicting lymph node metastasis [49] and osteoporosis [50,51]. Studies have also reported on the performance of AI in determining the need for orthodontic treatment [52,53], determining the working length and detecting vertical root fractures in endodontic treatment [54,55], determining the degree of periodontal bone loss [56] and diagnosing oral cancer [57]. There is a need for standardized criteria for clinical decision making in OGS. Currently, AI models have been developed for and applied to deciding on the need for OGS, as well as its diagnosis, treatment planning and prognosis prediction.

4.1. Application of AI in Diagnosis and Determining the Need of OGS

OGS is considered when the occlusion that can potentially be achieved with orthodontic treatment alone is inadequate, rendering it impossible to resolve the patient’s chief complaint with a less invasive procedure. To achieve the best treatment outcomes, precise diagnosis and sound decision making are very important. Various studies have described the application of AI in the diagnosis of OGS. Choi et al. reported on applying a machine learning model for determining the need for OGS using landmarks on cephalometric radiographs. This AI-based model demonstrated excellent results, with a 96% success rate in distinguishing cases requiring surgical versus non-surgical treatment and 91% accuracy in deciding on the type of surgery and the need for extraction. The limitation of this study was that the authors had excluded cases with skeletal asymmetry [33]. Another study, conducted by Knoops et al., reported on using a machine learning model for automated diagnosis of cases requiring OGS using 3D images. This model demonstrated outstanding results, with 95.5% sensitivity and 95.2% specificity in diagnosing OGS and a mean accuracy of 1.1 ± 0.3 mm in simulating the surgical outcomes. However, the training of this model was limited to a small number of data sets, and it also included volunteers indicated for mild OGS for validation. The performance of this model could be enhanced by integrating shape data and electronic medical records [34]. Kim et al. reported on four AI-based models, ResNet-18, 34, 50 and 101, with different depths of neural networks, for the diagnosis of OGS. The average success rates of these AI-based models were 93.80%, 93.60%, 91.13%, and 91.33%, respectively. ResNet-18 performed best among the four models, with an AUC of 0.979. These findings confirm that the linear structure performed better than the bottleneck structure, and that, for the linear structure to be used effectively, the neural networks should not be too deep. However, the data in this study lacked multicenter representation, as it was obtained from a single center [37].
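The implementations used in the studies above are not reproduced in this review; as a minimal PyTorch sketch under stated assumptions, the snippet below shows how a ResNet-18 backbone (the depth-18 variant compared by Kim et al.) can be adapted to a two-class surgery/non-surgery decision. The image tensors and labels are placeholders, not cephalometric data from any study.

```python
# Minimal sketch (not the authors' implementation): ResNet-18 adapted to a
# two-class "surgery vs. non-surgery" decision on cephalometric-style images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)           # depth-18 variant; resnet34/50/101 are analogous
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the ImageNet head with a 2-class head

# Placeholder batch: 4 grayscale cephalograms replicated to 3 channels, 224x224 pixels.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])             # 0 = non-surgery, 1 = surgery (hypothetical)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("training loss on placeholder batch:", loss.item())
```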
In their study, Lin et al. reported on using a machine learning model to predict the need for OGS in patients who were treated for unilateral cleft lip and palate, using lateral cephalogram variables. This AI model demonstrated an accuracy of 87.4% in predicting the need for OGS. In order to avoid selection bias, the authors included patients from a single ethnicity, cephalometric data from a younger age group (6.3 years), patients requiring the same treatment protocol, and examinations carried out by one orthodontist. A limitation of this study was the small sample size from a single center [35]. Shin et al. reported on using a deep learning model for predicting the need for OGS using cephalograms. This AI model demonstrated excellent performance, with an accuracy of 0.954, sensitivity of 0.844, and specificity of 0.993 in determining the need for OGS. The model utilized data from both posterior-anterior and lateral cephalograms to achieve better accuracy. However, the limitation of this study was that the data were obtained from a limited number of patients and from one particular hospital [36].

4.2. Application of AI in Predicting Facial Symmetry following OGS

OGS is considered with the therapeutic goal of achieving functional and esthetic corrections. Evaluation of facial asymmetry is crucial in planning and executing OGS. A study conducted by Lin et al. assessed the degree of facial asymmetry in patients before and after OGS using a Deep Convolutional Neural Network (DCNN)-based model. This model demonstrated an accuracy of 78.85% on held-out patterns, and facial symmetry degree assessment within 1 degree was 98.63%. A comparison was made to assess the difference between pre- and post-surgery facial symmetry using this model; the mean pre-operative facial symmetry degree was higher than the post-operative degree, with a significant improvement (p < 0.02). The limitation of this study was that the model was built using a small sample and the validation was confined to data obtained from one particular center [26]. In their study, Tanikawa et al. reported on using an AI-based model for predicting the facial morphology after OGS using 3D facial images. The study reported an average system error of 0.94 mm. The success rate when success was defined by a system error of <1 mm was 54%, and for <2 mm it was 100%. However, the limitation of this study was that the data used for developing this model were obtained from only two hospitals, so further studies using data from other hospitals are needed to confirm the performance of this model [29]. Lin et al. reported on a Convolutional Neural Network (CNN)-based AI model for assessing facial symmetry before and after OGS using CBCT images. The model demonstrated excellent performance, with an accuracy of 90% in assessing facial symmetry. Although the model performed well, the authors suggested that a larger number of data sets is needed to further enhance its accuracy [30]. In their study, Lo et al. reported on applying a CNN-based AI model for predicting facial symmetry before and after OGS using 3D facial images. This model was efficient in evaluating facial symmetry; however, there were dissimilarities between the patients’ subjective view and the ML score, since the assessment score represents general results [31]. A similar study conducted by ter Horst et al. reported on applying a deep learning model for predicting the soft tissue profile for planning OGS using CBCT data. This model was efficient in predicting the 3D soft tissue profile; however, it tended to underpredict the displacements in asymmetrical cases and in cases with cranial/caudal displacements. Another limitation was the small number of samples used for training the model [32].

4.3. Application of AI for Planning OGS

OGS is mainly considered with the intent to correct deformed facial structures involving the jaw bones and to restore symmetry and sagittal balance with esthetic corrections. A well-designed treatment plan is the key to a successful OGS. The conventional approach to treatment planning is mainly dependent on the surgeon’s experience, supplemented by cephalometric analysis and study models [5]. Currently, deep learning-based automated models have been applied for OGS planning [39,40]. Xiao et al. reported using a deep learning-based model to estimate normal 3D bone shape models in patients with facial deformities requiring orthognathic corrections. The authors compared this model with the existing sparse representation method [58]; the deep learning model demonstrated superior estimation performance [39,40]. However, because of the lack of ground-truth reference bones during training, this model was trained on simulated pairs of deformed and normal bones, which are unlikely to cover all possible types of deformities. Hence, this model may demonstrate suboptimal performance when applied to real data with orthognathic deformities [39].
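Xiao et al.’s point-cloud network is not reproduced here; as a loose, simplified sketch of the general PointNet-style idea often used for point-cloud deep learning (a shared per-point MLP followed by a permutation-invariant pooling over points), the snippet below encodes a placeholder bone-surface point cloud into a global shape feature. The architecture, dimensions and data are assumptions for illustration only.

```python
# Loose sketch of a PointNet-style point-cloud encoder (not Xiao et al.'s actual network):
# a shared per-point MLP followed by a permutation-invariant max pooling over points.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    def __init__(self, feature_dim=256):
        super().__init__()
        # Conv1d with kernel size 1 applies the same MLP to every point independently.
        self.shared_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feature_dim, 1),
        )

    def forward(self, points):                 # points: (batch, 3, n_points)
        per_point = self.shared_mlp(points)    # (batch, feature_dim, n_points)
        return per_point.max(dim=2).values     # global shape feature, order-invariant

encoder = PointCloudEncoder()
bone_surface = torch.randn(2, 3, 2048)         # placeholder: 2 bone surfaces, 2048 points each
print(encoder(bone_surface).shape)             # torch.Size([2, 256])
```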

4.4. Application of AI for Predicting Blood Loss Prior to OGS

Similar to other complex and invasive procedures, OGS is likely to have associated complications, such as excessive blood loss [59]. Predicting these complications ahead of their occurrence can be beneficial for their management. Several studies have reported on the factors that can be analyzed to predict and anticipate blood loss in patients undergoing OGS [60,61,62]. A report by Stehrer et al. demonstrated the application of an AI model in predicting perioperative blood loss in OGS. An AI-based random forest algorithm assessed multiple variables (bimaxillary surgery, preoperative hematocrit, preoperative hemoglobin, preoperative erythrocytes, surgical time, blood volume, bilateral sagittal split osteotomy (BSSO), sex, BMI, BSSO with genioplasty, age, and Le Fort 1 osteotomy) for predicting perioperative blood loss. This ML-based AI model showed a statistically significant correlation between the actual and predicted perioperative blood loss (p < 0.001). Although the model demonstrated good performance on the data obtained from one particular clinic, the authors mention that it may not perform as precisely on data sets obtained from other clinics or other patient populations [38].
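Stehrer et al.’s model and data are not available here; the sketch below, which uses synthetic values for a subset of the predictor variables named above (feature names and the target relationship are assumptions), shows how a random forest regression of this kind is commonly set up with scikit-learn.

```python
# Illustrative random-forest regression for perioperative blood loss
# (synthetic data; not Stehrer et al.'s model or dataset).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "bimaxillary_surgery": rng.integers(0, 2, n),
    "preop_hematocrit": rng.normal(42, 3, n),
    "preop_hemoglobin": rng.normal(14, 1.2, n),
    "surgical_time_min": rng.normal(180, 40, n),
    "blood_volume_ml": rng.normal(4800, 500, n),
    "bsso": rng.integers(0, 2, n),
    "age": rng.integers(18, 55, n),
    "bmi": rng.normal(24, 3, n),
})
# Hypothetical target: blood loss loosely driven by surgical time and bimaxillary surgery.
y = 150 + 2.0 * X["surgical_time_min"] + 120 * X["bimaxillary_surgery"] + rng.normal(0, 50, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 2))
```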

5. Conclusions

OGS, which is mainly planned for correcting skeletal and facial asymmetry, the facial profile and deformities, requires vast clinical experience. The AI models applied in OGS are mostly based on machine learning and deep learning architectures. These models learn deep features for image recognition after being trained on data sets. Most of the models used in OGS are reported to use 3D facial images and CBCT, which involve limited radiation exposure. These models have proven to be efficient and have outperformed conventional methods. They are reported to be reliable and reproducible; hence, they can be very useful as an aid for less experienced practitioners in clinical decision making and in achieving better clinical outcomes. In addition, the application of AI technology in dentistry has been found to be a cost-effective approach, since these models have demonstrated higher accuracy in comparison with experienced specialists. Early diagnosis and prediction of the need for OGS using AI applications can be of great use for clinicians in planning the timing and duration of treatment. However, most of the studies state that, in order to enhance the performance of AI-based models, these models need additional training on larger data sets obtained from multiple centers and different populations.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/app12115581/s1. Table S1: Structured search strategy carried out in electronic databases; Table S2: Assessment of risk of bias domains and applicability concerns.

Author Contributions

Conceptualization, S.B.K. and K.A.; methodology, L.A.; software, M.A.; validation, F.A., L.A. and M.A.A.; formal analysis, M.A.; investigation, S.B.K.; resources, K.A.; data curation, S.B.K.; writing—original draft preparation, S.B.K.; writing—review and editing, L.A.; visualization, F.A.; supervision, K.A.; project administration, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Larsson, P.; Bondemark, L.; Häggman-Henrikson, B. The Impact of Oro-Facial Appearance on Oral Health-Related Quality of Life: A Systematic Review. J. Oral Rehabil. 2021, 48, 271–281. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Naini, F.B.; Donaldson, A.N.A.; McDonald, F.; Cobourne, M.T. Assessing the Influence of Asymmetry Affecting the Mandible and Chin Point on Perceived Attractiveness in the Orthognathic Patient, Clinician, and Layperson. J. Oral Maxillofac. Surg. 2012, 70, 192–206. [Google Scholar] [CrossRef] [PubMed]
  3. Jackson, T.H.; Mitroff, S.R.; Clark, K.; Proffit, W.R.; Lee, J.Y.; Nguyen, T.T. Face Symmetry Assessment Abilities: Clinical Implications for Diagnosing Asymmetry. Am. J. Orthod. Dentofac. Orthop. 2013, 144, 663–671. [Google Scholar] [CrossRef] [Green Version]
  4. Olivetti, E.C.; Nicotera, S.; Marcolin, F.; Vezzetti, E.; Sotong, J.P.A.; Zavattero, E.; Ramieri, G. 3D Soft-Tissue Prediction Methodologies for Orthognathic Surgery—A Literature Review. Appl. Sci. 2019, 9, 4550. [Google Scholar] [CrossRef] [Green Version]
  5. Xia, J.J.; Gateno, J.; Teichgraeber, J.F.; Yuan, P.; Chen, K.-C.; Li, J.; Zhang, X.; Tang, Z.; Alfi, D.M. Algorithm for Planning a Double-Jaw Orthognathic Surgery Using a Computer-Aided Surgical Simulation (CASS) Protocol. Part 1: Planning Sequence. Int. J. Oral Maxillofac. Surg. 2015, 44, 1431–1440. [Google Scholar] [CrossRef] [Green Version]
  6. Ho, C.-T.; Lin, H.-H.; Liou, E.J.W.; Lo, L.-J. Three-Dimensional Surgical Simulation Improves the Planning for Correction of Facial Prognathism and Asymmetry: A Qualitative and Quantitative Study. Sci. Rep. 2017, 7, 40423. [Google Scholar] [CrossRef]
  7. Xia, J.J.; Gateno, J.; Teichgraeber, J.F. New Clinical Protocol to Evaluate Craniomaxillofacial Deformity and Plan Surgical Correction. J. Oral Maxillofac. Surg. 2009, 67, 2093–2106. [Google Scholar] [CrossRef] [Green Version]
  8. Wu, T.-Y.; Lin, H.-H.; Lo, L.-J.; Ho, C.-T. Postoperative Outcomes of Two- and Three-Dimensional Planning in Orthognathic Surgery: A Comparative Study. J. Plast. Reconstr. Aesthet. Surg. 2017, 70, 1101–1111. [Google Scholar] [CrossRef]
  9. Lin, H.-H.; Lonic, D.; Lo, L.-J. 3D Printing in Orthognathic Surgery—A Literature Review. J. Formos. Med. Assoc. 2018, 117, 547–558. [Google Scholar] [CrossRef]
  10. Alkhayer, A.; Piffkó, J.; Lippold, C.; Segatto, E. Accuracy of Virtual Planning in Orthognathic Surgery: A Systematic Review. Head Face Med. 2020, 16, 34. [Google Scholar] [CrossRef]
  11. Medeiros, F.A.; Jammal, A.A.; Thompson, A.C. From Machine to Machine: An OCT-Trained Deep Learning Algorithm for Objective Quantification of Glaucomatous Damage in Fundus Photographs. Ophthalmology 2019, 126, 513–521. [Google Scholar] [CrossRef] [PubMed]
  12. Zheng, X.; Yao, Z.; Huang, Y.; Yu, Y.; Wang, Y.; Liu, Y.; Mao, R.; Li, F.; Xiao, Y.; Wang, Y.; et al. Deep Learning Radiomics Can Predict Axillary Lymph Node Status in Early-Stage Breast Cancer. Nat. Commun. 2020, 11, 1236. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
  14. Li, R.; Xiao, C.; Huang, Y.; Hassan, H.; Huang, B. Deep Learning Applications in Computed Tomography Images for Pulmonary Nodule Detection and Diagnosis: A Review. Diagnostics 2022, 12, 298. [Google Scholar] [CrossRef] [PubMed]
  15. Kwon, O.; Yong, T.-H.; Kang, S.-R.; Kim, J.-E.; Huh, K.-H.; Heo, M.-S.; Lee, S.-S.; Choi, S.-C.; Yi, W.-J. Automatic Diagnosis for Cysts and Tumors of Both Jaws on Panoramic Radiographs Using a Deep Convolution Neural Network. Dentomaxillofac. Radiol. 2020, 49, 20200185. [Google Scholar] [CrossRef]
  16. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M.M. The Use and Performance of Artificial Intelligence Applications in Dental and Maxillofacial Radiology: A Systematic Review. Dentomaxillofac. Radiol. 2020, 49, 20190107. [Google Scholar] [CrossRef]
  17. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  18. Sayres, R.; Taly, A.; Rahimy, E.; Blumer, K.; Coz, D.; Hammel, N.; Krause, J.; Narayanaswamy, A.; Rastegar, Z.; Wu, D.; et al. Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy. Ophthalmology 2019, 126, 552–564. [Google Scholar] [CrossRef] [Green Version]
  19. Lee, J.-H.; Kim, D.-H.; Jeong, S.-N.; Choi, S.-H. Detection and Diagnosis of Dental Caries Using a Deep Learning-Based Convolutional Neural Network Algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef]
  20. Chang, H.-J.; Lee, S.-J.; Yong, T.-H.; Shin, N.-Y.; Jang, B.-G.; Kim, J.-E.; Huh, K.-H.; Lee, S.-S.; Heo, M.-S.; Choi, S.-C.; et al. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci. Rep. 2020, 10, 7531. [Google Scholar] [CrossRef]
  21. Yang, H.; Jo, E.; Kim, H.J.; Cha, I.-H.; Jung, Y.-S.; Nam, W.; Kim, J.-Y.; Kim, J.-K.; Kim, Y.H.; Oh, T.G.; et al. Deep Learning for Automated Detection of Cyst and Tumors of the Jaw in Panoramic Radiographs. J. Clin. Med. 2020, 9, 1839. [Google Scholar] [CrossRef] [PubMed]
  22. Lee, J.-H.; Kim, D.-H.; Jeong, S.-N.; Choi, S.-H. Diagnosis and Prediction of Periodontally Compromised Teeth Using a Deep Learning-Based Convolutional Neural Network Algorithm. J. Periodontal Implant Sci. 2018, 48, 114. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. McGrath, T.A.; Alabousi, M.; Skidmore, B.; Korevaar, D.A.; Bossuyt, P.M.M.; Moher, D.; Thombs, B.; McInnes, M.D.F. Recommendations for Reporting of Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy: A Systematic Review. Syst. Rev. 2017, 6, 194. [Google Scholar] [CrossRef]
  24. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M.; QUADAS-2 Group. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef] [PubMed]
  25. Lu, C.-H.; Ko, E.W.-C.; Liu, L. Improving the Video Imaging Prediction of Postsurgical Facial Profiles with an Artificial Neural Network. J. Dent. Sci. 2009, 4, 118–129. [Google Scholar] [CrossRef] [Green Version]
  26. Lin, H.-H.; Lo, L.-J.; Chiang, W.-C.; Chen, C.-F. An Automatic Assessment of Facial Symmetry before and after Orthognathic Surgery Based on Three-Dimensional Contour Features using Deep Learning System. Available online: http://www.iraj.in/journal/journal_file/journal_pdf/6-462-153034747838-41.pdf (accessed on 1 February 2022).
  27. Patcas, R.; Bernini, D.A.J.; Volokitin, A.; Agustsson, E.; Rothe, R.; Timofte, R. Applying Artificial Intelligence to Assess the Impact of Orthognathic Treatment on Facial Attractiveness and Estimated Age. Int. J. Oral Maxillofac. Surg. 2019, 48, 77–83. [Google Scholar] [CrossRef] [Green Version]
  28. Jeong, S.H.; Yun, J.P.; Yeom, H.-G.; Lim, H.J.; Lee, J.; Kim, B.C. Deep Learning Based Discrimination of Soft Tissue Profiles Requiring Orthognathic Surgery by Facial Photographs. Sci. Rep. 2020, 10, 16235. [Google Scholar] [CrossRef]
  29. Tanikawa, C.; Yamashiro, T. Development of Novel Artificial Intelligence Systems to Predict Facial Morphology after Orthognathic Surgery and Orthodontic Treatment in Japanese Patients. Sci. Rep. 2021, 11, 15853. [Google Scholar] [CrossRef]
  30. Lin, H.-H.; Chiang, W.-C.; Yang, C.-T.; Cheng, C.-T.; Zhang, T.; Lo, L.-J. On Construction of Transfer Learning for Facial Symmetry Assessment before and after Orthognathic Surgery. Comput. Methods Programs Biomed. 2021, 200, 105928. [Google Scholar] [CrossRef]
  31. Lo, L.-J.; Yang, C.-T.; Ho, C.-T.; Liao, C.-H.; Lin, H.-H. Automatic Assessment of 3-Dimensional Facial Soft Tissue Symmetry before and after Orthognathic Surgery Using a Machine Learning Model: A Preliminary Experience. Ann. Plast. Surg. 2021, 86, S224–S228. [Google Scholar] [CrossRef]
  32. Ter Horst, R.; van Weert, H.; Loonen, T.; Bergé, S.; Vinayahalingam, S.; Baan, F.; Maal, T.; de Jong, G.; Xi, T. Three-Dimensional Virtual Planning in Mandibular Advancement Surgery: Soft Tissue Prediction Based on Deep Learning. J. Craniomaxillofac. Surg. 2021, 49, 775–782. [Google Scholar] [CrossRef] [PubMed]
  33. Choi, H.-I.; Jung, S.-K.; Baek, S.-H.; Lim, W.H.; Ahn, S.-J.; Yang, I.-H.; Kim, T.-W. Artificial Intelligent Model with Neural Network Machine Learning for the Diagnosis of Orthognathic Surgery. J. Craniofac. Surg. 2019, 30, 1986–1989. [Google Scholar] [CrossRef] [PubMed]
  34. Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A Machine Learning Framework for Automated Diagnosis and Computer-Assisted Planning in Plastic and Reconstructive Surgery. Sci. Rep. 2019, 9, 13597. [Google Scholar] [CrossRef]
  35. Lin, G.; Kim, P.-J.; Baek, S.-H.; Kim, H.-G.; Kim, S.-W.; Chung, J.-H. Early Prediction of the Need for Orthognathic Surgery in Patients with Repaired Unilateral Cleft Lip and Palate Using Machine Learning and Longitudinal Lateral Cephalometric Analysis Data. J. Craniofac. Surg. 2021, 32, 616–620. [Google Scholar] [CrossRef] [PubMed]
  36. Shin, W.; Yeom, H.-G.; Lee, G.H.; Yun, J.P.; Jeong, S.H.; Lee, J.H.; Kim, H.K.; Kim, B.C. Deep Learning Based Prediction of Necessity for Orthognathic Surgery of Skeletal Malocclusion Using Cephalogram in Korean Individuals. BMC Oral Health 2021, 21, 130. [Google Scholar] [CrossRef] [PubMed]
  37. Kim, Y.-H.; Park, J.-B.; Chang, M.-S.; Ryu, J.-J.; Lim, W.H.; Jung, S.-K. Influence of the Depth of the Convolutional Neural Networks on an Artificial Intelligence Model for Diagnosis of Orthognathic Surgery. J. Pers. Med. 2021, 11, 356. [Google Scholar] [CrossRef]
  38. Stehrer, R.; Hingsammer, L.; Staudigl, C.; Hunger, S.; Malek, M.; Jacob, M.; Meier, J. Machine Learning Based Prediction of Perioperative Blood Loss in Orthognathic Surgery. J. Craniomaxillofac. Surg. 2019, 47, 1676–1681. [Google Scholar] [CrossRef]
  39. Xiao, D.; Lian, C.; Deng, H.; Kuang, T.; Liu, Q.; Ma, L.; Kim, D.; Lang, Y.; Chen, X.; Gateno, J.; et al. Estimating Reference Bony Shape Models for Orthognathic Surgical Planning Using 3D Point-Cloud Deep Learning. IEEE J. Biomed. Health Inform. 2021, 25, 2958–2966. [Google Scholar] [CrossRef]
  40. Xiao, D.; Deng, H.; Lian, C.; Kuang, T.; Liu, Q.; Ma, L.; Lang, Y.; Chen, X.; Kim, D.; Gateno, J.; et al. Unsupervised Learning of Reference Bony Shapes for Orthognathic Surgical Planning with a Surface Deformation Network. Med. Phys. 2021, 48, 7735–7746. [Google Scholar] [CrossRef]
  41. Dot, G.; Schouman, T.; Dubois, G.; Rouch, P.; Gajny, L. Fully Automatic Segmentation of Craniomaxillofacial CT Scans for Computer-Assisted Orthognathic Surgery Planning Using the NnU-Net Framework. Eur. Radiol. 2022, 32, 3639–3648. [Google Scholar] [CrossRef]
  42. Lee, K.-S.; Ryu, J.-J.; Jang, H.S.; Lee, D.-Y.; Jung, S.-K. Deep Convolutional Neural Networks Based Analysis of Cephalometric Radiographs for Differential Diagnosis of Orthognathic Surgery Indications. Appl. Sci. 2020, 10, 2124. [Google Scholar] [CrossRef] [Green Version]
  43. Granholm, A.; Alhazzani, W.; Møller, M.H. Use of the GRADE Approach in Systematic Reviews and Guidelines. Br. J. Anaesth. 2019, 123, 554–559. [Google Scholar] [CrossRef] [PubMed]
  44. Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schürmann, F.; Krejci, I.; Markram, H. Caries Detection with Near-Infrared Transillumination Using Deep Learning. J. Dent. Res. 2019, 98, 1227–1233. [Google Scholar] [CrossRef] [Green Version]
  45. Devito, K.L.; de Souza Barbosa, F.; Felippe Filho, W.N. An Artificial Multilayer Perceptron Neural Network for Diagnosis of Proximal Dental Caries. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod. 2008, 106, 879–884. [Google Scholar] [CrossRef] [PubMed]
  46. Hung, M.; Voss, M.W.; Rosales, M.N.; Li, W.; Su, W.; Xu, J.; Bounsanga, J.; Ruiz-Negrón, B.; Lauren, E.; Licari, F.W. Application of Machine Learning for Diagnostic Prediction of Root Caries. Gerodontology 2019, 36, 395–404. [Google Scholar] [CrossRef]
  47. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-Learning Classification Using Convolutional Neural Network for Evaluation of Maxillary Sinusitis on Panoramic Radiography. Oral Radiol. 2019, 35, 301–307. [Google Scholar] [CrossRef]
  48. Kise, Y.; Ikeda, H.; Fujii, T.; Fukuda, M.; Ariji, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Preliminary Study on the Application of Deep Learning System to Diagnosis of Sjögren’s Syndrome on CT Images. Dentomaxillofac. Radiol. 2019, 48, 20190019. [Google Scholar] [CrossRef] [PubMed]
  49. Ariji, Y.; Sugita, Y.; Nagao, T.; Nakayama, A.; Fukuda, M.; Kise, Y.; Nozawa, M.; Nishiyama, M.; Katumata, A.; Ariji, E. CT Evaluation of Extranodal Extension of Cervical Lymph Node Metastases in Patients with Oral Squamous Cell Carcinoma Using Deep Learning Classification. Oral Radiol. 2020, 36, 148–155. [Google Scholar] [CrossRef]
  50. Lee, K.-S.; Jung, S.-K.; Ryu, J.-J.; Shin, S.-W.; Choi, J. Evaluation of Transfer Learning with Deep Convolutional Neural Networks for Screening Osteoporosis in Dental Panoramic Radiographs. J. Clin. Med. 2020, 9, 392. [Google Scholar] [CrossRef] [Green Version]
  51. Lee, J.-S.; Adhikari, S.; Liu, L.; Jeong, H.-G.; Kim, H.; Yoon, S.-J. Osteoporosis Detection in Panoramic Radiographs Using a Deep Convolutional Neural Network-Based Computer-Assisted Diagnosis System: A Preliminary Study. Dentomaxillofac. Radiol. 2019, 48, 20170344. [Google Scholar] [CrossRef]
  52. Xie, X.; Wang, L.; Wang, A. Artificial Neural Network Modeling for Deciding If Extractions Are Necessary Prior to Orthodontic Treatment. Angle Orthod. 2010, 80, 262–266. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Thanathornwong, B. Bayesian-Based Decision Support System for Assessing the Needs for Orthodontic Treatment. Healthc. Inform. Res. 2018, 24, 22–28. [Google Scholar] [CrossRef] [PubMed]
  54. Saghiri, M.A.; Asgar, K.; Boukani, K.K.; Lotfi, M.; Aghili, H.; Delvarani, A.; Karamifar, K.; Saghiri, A.M.; Mehrvarzfar, P.; Garcia-Godoy, F. A New Approach for Locating the Minor Apical Foramen Using an Artificial Neural Network: Artificial Neural Network in Dentistry. Int. Endod. J. 2012, 45, 257–265. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Johari, M.; Esmaeili, F.; Andalib, A.; Garjani, S.; Saberkari, H. Detection of Vertical Root Fractures in Intact and Endodontically Treated Premolar Teeth by Designing a Probabilistic Neural Network: An Ex Vivo Study. Dentomaxillofac. Radiol. 2017, 46, 20160107. [Google Scholar] [CrossRef] [Green Version]
  56. Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep Learning for the Radiographic Detection of Periodontal Bone Loss. Sci. Rep. 2019, 9, 8495. [Google Scholar] [CrossRef]
  57. Aubreville, M.; Knipfer, C.; Oetter, N.; Jaremenko, C.; Rodner, E.; Denzler, J.; Bohr, C.; Neumann, H.; Stelzle, F.; Maier, A. Automatic Classification of Cancerous Tissue in Laserendomicroscopy Images of the Oral Cavity Using Deep Learning. Sci. Rep. 2017, 7, 11979. [Google Scholar] [CrossRef] [Green Version]
  58. Wang, L.; Ren, Y.; Gao, Y.; Tang, Z.; Chen, K.-C.; Li, J.; Shen, S.G.F.; Yan, J.; Lee, P.K.M.; Chow, B.; et al. Estimating Patient-Specific and Anatomically Correct Reference Model for Craniomaxillofacial Deformity via Sparse Representation: Estimating Patient-Specific and Anatomically Correct Reference Model. Med. Phys. 2015, 42, 5809–5816. [Google Scholar] [CrossRef] [Green Version]
  59. Piñeiro-Aguilar, A.; Somoza-Martín, M.; Gandara-Rey, J.M.; García-García, A. Blood Loss in Orthognathic Surgery: A Systematic Review. J. Oral Maxillofac. Surg. 2011, 69, 885–892. [Google Scholar] [CrossRef]
  60. Olsen, J.J.; Ingerslev, J.; Thorn, J.J.; Pinholt, E.M.; Gram, J.B.; Sidelmann, J.J. Can Preoperative Sex-Related Differences in Hemostatic Parameters Predict Bleeding in Orthognathic Surgery? J. Oral Maxillofac. Surg. 2016, 74, 1637–1642. [Google Scholar] [CrossRef]
  61. Thastum, M.; Andersen, K.; Rude, K.; Nørholt, S.E.; Blomlöf, J. Factors Influencing Intraoperative Blood Loss in Orthognathic Surgery. Int. J. Oral Maxillofac. Surg. 2016, 45, 1070–1073. [Google Scholar] [CrossRef]
  62. Salma, R.G.; Al-Shammari, F.M.; Al-Garni, B.A.; Al-Qarzaee, M.A. Operative Time, Blood Loss, Hemoglobin Drop, Blood Transfusion, and Hospital Stay in Orthognathic Surgery. Oral Maxillofac. Surg. 2017, 21, 259–266. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow chart for screening and selection of articles.
Figure 2. QUADAS-2 assessment of the individual risk of bias domains and applicability concerns.
Table 1. Description of the PICO (P = Population, I = Intervention, C = Comparison, O = Outcome) elements.
Research question: What AI applications have been designed for OGS, and what is their performance in diagnosis, treatment planning and prediction of the prognosis of OGS?
Population: Patients who underwent investigations for OGS (maxillary osteotomy, mandibular osteotomy, bilateral sagittal split osteotomy (BSSO), genioplasty, Le Fort 1 osteotomy).
Intervention: AI applications for diagnosis, treatment planning and prediction of the prognosis of OGS.
Comparison: Specialist opinions, reference standards.
Outcome: Measurable or predictive outcomes such as accuracy, sensitivity, specificity, ROC (receiver operating characteristic curve), AUC (area under the curve), ICC (intraclass correlation coefficient), statistical significance, F1 scores, vDSC (volumetric Dice similarity coefficient), and sDSC (surface Dice similarity coefficient).
Table 2. Details of the studies that have used AI-based models in the diagnosis, treatment planning and prognosis of OGS.
Each entry lists: authors and year [reference]; study design and algorithm architecture; study objective; data for training/testing; study factor; modality; comparison (if any); reported performance (evaluation accuracy/average accuracy/statistical significance); result ((+) effective, (−) non-effective, (N) neutral); outcomes; and the authors’ conclusions.
1. C. H. Lu et al. [25], 2009. Retrospective cohort study; ANNs. Objective: evaluate post-OGS image prediction using the AI model. Data: 30 subjects. Study factor: landmarks. Modality: lateral cephalogram and facial images. Comparison: actual post-surgical profile. Performance: most prediction errors were <1 mm. Result: (+) effective. Outcome: ANNs are able to predict the post-surgical facial profile. Conclusion: the model might be more reliable and accurate if more variables were considered.
2. H. H. Lin et al. [26], 2018. Case control study; CNNs. Objective: assess the degree of facial asymmetry in patients who had undergone OGS. Data: 100 subjects. Study factor: landmarks. Modality: 3D facial images. Comparison: specialist. Performance: 78.85% accuracy on held-out test patterns; facial symmetry degree assessment within 1 degree was 98.63%; pre- versus post-surgery predictions were statistically significant (p < 0.05). Result: (+) effective. Outcome: the model is an efficient decision-making tool. Conclusion: this automated model can be useful in clinics for assessing pre- and post-operative facial symmetry.
3. R. Patcas et al. [27], 2019. Case control study; CNNs. Objective: assess the impact of OGS on facial attractiveness and estimated age. Data: 146 subjects (2164 photographs). Study factor: landmarks. Modality: facial photographs. Comparison: actual post-surgical profile. Performance: 66.4% of patients’ appearance improved post-surgery, compared with an actual post-surgical improvement of 74.7%. Result: (+) effective. Outcome: the model is efficient in scoring facial attractiveness and apparent age. Conclusion: the model outperformed past approaches and can be considered for clinical application.
4. H.-I. Choi et al. [33], 2019. Case control study; ANNs. Objective: decision making on surgery/non-surgery, type of surgery and the need for extractions. Data: 316 subjects (204 for training, 112 for testing). Study factor: landmarks. Modality: lateral cephalogram. Comparison: one orthodontic specialist. Performance: ICC ranged between 0.97 and 0.99; accuracy of 96% for the surgery/non-surgery decision and 91% for diagnosing the type of surgery and deciding on extractions. Result: (+) effective. Outcome: the ANN model demonstrated excellent reliability. Conclusion: the model could be applied in the diagnosis of OGS.
5. P. G. M. Knoops et al. [34], 2019. Retrospective cohort study; CNNs. Objective: automated model for diagnosis and clinical decision making in OGS. Data: trained with 4261 3D facial images, tested with 151 subjects (273 3D facial images). Study factor: landmarks. Modality: data sets and 3D face scans. Comparison: not mentioned. Performance: 95.5% sensitivity, 95.2% specificity, mean accuracy of 1.1 ± 0.3 mm. Result: (+) effective. Outcome: efficient in diagnosis, risk stratification and treatment simulation. Conclusion: the model is efficient in clinical decision making.
6. R. Stehrer et al. [38], 2019. Case control study; CNNs. Objective: predict perioperative blood loss prior to OGS. Data: 950 subjects (80% for training, 20% for testing). Study factor: correlation between actual and predicted perioperative blood loss. Modality: data sets. Comparison: data on actual blood loss. Performance: statistically significant correlation (p < 0.001). Result: (+) effective. Outcome: efficient in predicting perioperative blood loss. Conclusion: the model is helpful in predicting blood loss prior to OGS.
7. S. H. Jeong et al. [28], 2020. Interventional cohort study; CNNs. Objective: determine the ability of the CNN model to predict soft tissue profiles requiring OGS. Data: 822 subjects (411 requiring OGS, 411 not requiring OGS). Study factor: landmarks. Modality: facial photographs. Comparison: 2 orthodontists, 3 maxillofacial surgeons, 1 maxillofacial radiologist. Performance: accuracy = 0.893, precision = 0.912, recall = 0.867, F1 score = 0.889. Result: (+) effective. Outcome: efficient in predicting soft tissue profiles requiring orthognathic surgery. Conclusion: the model can judge soft tissue profiles requiring OGS using facial photographs.
8. K. S. Lee et al. [42], 2020. Cohort study; DCNNs. Objective: evaluate a DCNN-based model designed for differential diagnosis of OGS. Data: 220 cases for training and 73 for validation. Study factor: landmarks. Modality: lateral cephalogram. Comparison: Modified-Alexnet, MobileNet and Resnet50. Performance: Modified-Alexnet, MobileNet and Resnet50 demonstrated AUC of 0.969, 0.908 and 0.923; accuracy of 0.919, 0.838 and 0.838; sensitivity of 0.852, 0.761 and 0.750; and specificity of 0.973, 0.931 and 0.944, respectively. Result: (+) effective. Outcome: Modified-Alexnet demonstrated the highest level of performance. Conclusion: these models can be successfully applied for differential diagnosis of OGS.
9. C. Tanikawa et al. [29], 2020. Case control study; ANNs. Objective: predict facial morphology after OGS and orthodontic treatment. Data: 137 subjects (72 OGS and 65 orthodontic treatment). Study factor: landmarks. Modality: lateral cephalogram and 3D facial images. Comparison: two AI models, System S for OGS and System E for orthodontic treatment. Performance: success rates with a system error of <1 mm were 54% and 98%; with a system error of <2 mm, success rates were 100% for both. Result: (+) effective. Outcome: the success rate of the models was 100% when the system error threshold was set at <2 mm. Conclusion: these models are clinically acceptable for predicting facial morphology.
10. D. Xiao et al. [39], 2021. Case control study; CNNs. Objective: AI model for OGS planning. Data: CT scans of 47 normal subjects for training, 24 CT scans for testing. Study factor: landmarks. Modality: CT scans and clinical data sets. Comparison: landmark-based sparse representation (LSR). Performance: the AI model was significantly more accurate than LSR (p < 0.05). Result: (+) effective. Outcome: the model demonstrated significant performance improvements. Conclusion: this AI-based model generates accurate shape models that meet clinical standards.
11. D. Xiao et al. [40], 2021. Cohort study; CNNs. Objective: AI model (DefNet) for estimating patient-specific reference models for planning OGS. Data: CT scans of 47 subjects. Study factor: landmarks. Modality: CT scans and clinical data sets. Comparison: sparse representation (SR) method. Performance: vertex distance (VD) and edge-length distance (ED) were significantly smaller than with the SR method (p < 0.05). Result: (+) effective. Outcome: the model demonstrated comparable performance for synthetic data and better performance for real data. Conclusion: the proposed model outperforms an existing sparse representation method.
12. G. Lin et al. [35], 2021. Cohort study; CNNs. Objective: determine the need for OGS in unilateral cleft lip and palate patients. Data: 56 subjects. Study factor: landmarks. Modality: lateral cephalogram. Comparison: Boruta method. Performance: accuracy of 87.4%, F1 score of 0.714, sensitivity of 97.83%, specificity of 90.00%. Result: (+) effective. Outcome: the XGBoost algorithm demonstrated high prediction accuracy. Conclusion: the model can be applied for predicting the need for OGS in correcting sagittal discrepancies.
13. H. H. Lin et al. [30], 2021. Case control study; CNNs. Objective: assess facial symmetry before and after OGS. Data: 71 subjects. Study factor: landmarks. Modality: CBCT images. Comparison: 4 orthodontists and 4 plastic surgeons, as well as previously reported models (VGG16, VGG19, ResNet50 and Xception). Performance: accuracy of 90%. Result: (+) effective. Outcome: the Xception model and the constant data amplification approach achieved the highest accuracy. Conclusion: the model successfully predicted facial asymmetry before and after surgery.
14. L. J. Lo et al. [31], 2021. Retrospective cohort study; CNNs. Objective: assess facial soft tissue symmetry before and after OGS. Data: 158 subjects. Study factor: landmarks. Modality: 3D facial photographs. Comparison: pre- and post-operative. Performance: mean score improved significantly from 2.74 to 3.52. Result: (+) effective. Outcome: the model demonstrated results that can aid clinicians in assessing facial symmetry. Conclusion: the model can be integrated as a 3D surgical simulation model for effective treatment planning.
15. R. ter Horst et al. [32], 2021. Case control study; CNNs. Objective: predict the virtual soft tissue profile after mandibular advancement surgery. Data: 133 subjects (119 for training, 14 for testing). Study factor: landmarks. Modality: 3D photographs and CBCT images. Comparison: mass tensor model (MTM). Performance: mean absolute error of 1.0 ± 0.6 mm, lower than that of MTM and statistically significant (p = 0.02). Result: (+) effective. Outcome: the model demonstrated higher accuracy than MTM. Conclusion: the model can successfully predict 3D soft tissue profiles following mandibular advancement surgery.
16. W. S. Shin et al. [36], 2021. Cohort study; CNNs. Objective: predict the need for OGS using cephalograms. Data: 413 subjects. Study factor: landmarks. Modality: cephalogram. Comparison: 2 orthodontists, 3 maxillofacial surgeons, 1 maxillofacial radiologist. Performance: accuracy of 0.954, sensitivity of 0.844, specificity of 0.993. Result: (+) effective. Outcome: the model demonstrated high accuracy in predicting the need for OGS. Conclusion: the model will assist specialists as well as general dentists in decision making.
17. Y. H. Kim et al. [37], 2021. Case control study; CNNs. Objective: diagnose cases requiring orthognathic surgery using four models (ResNet-18, 34, 50 and 101). Data: 960 subjects (810 for training, 150 for testing). Study factor: landmarks. Modality: cephalogram. Comparison: ResNet-18, 34, 50 and 101. Performance: success rates of 93.80% (ResNet-18), 93.60% (ResNet-34), 91.13% (ResNet-50) and 91.33% (ResNet-101); AUC of 0.979, 0.974, 0.945 and 0.944, respectively. Result: (+) effective. Outcome: ResNet-18 and ResNet-34 demonstrated higher prediction performance than ResNet-50 or ResNet-101. Conclusion: these models demonstrated good accuracy in predicting the need for OGS.
18. G. Dot et al. [41], 2022. Cohort study; CNNs. Objective: evaluate the performance of a deep learning model for multi-task segmentation of craniomaxillofacial structures for OGS. Data: CT scans of 453 subjects (300 for training, 153 for testing). Study factor: landmarks. Modality: CT scans. Comparison: ground-truth segmentations generated by 2 operators. Performance: mean total vDSC and sDSC of 92.24 ± 6.19 and 98.03 ± 2.48, respectively. Result: (+) effective. Outcome: the AI model demonstrated adequate reliability. Conclusion: the model can easily be trained with more data sets for better performance.
ANNs = artificial neural networks; CNNs = convolutional neural networks; DCNNs = deep convolutional neural networks; c-index = concordance index; CT = computed tomography; CBCT = cone-beam computed tomography.
Table 3. Assessment of Strength of Evidence.
Application of AI in diagnosis and determining the need for OGS [33,34,35,36,37]: inconsistency not present; indirectness not present; imprecision not present; risk of bias not present; strength of evidence ⨁⨁⨁⨁.
Application of AI in the differential diagnosis of OGS [42]: inconsistency not present; indirectness not present; imprecision not present; risk of bias not present; strength of evidence ⨁⨁⨁⨁.
Application of AI for predicting post-operative facial profiles and facial symmetry [25,26,27,28,29,30,31,32]: inconsistency not present; indirectness not present; imprecision not present; risk of bias not present; strength of evidence ⨁⨁⨁⨁.
Application of AI for planning OGS [39,40]: inconsistency not present; indirectness not present; imprecision not present; risk of bias present; strength of evidence ⨁⨁⨁◯.
Application of AI for segmentation of maxillofacial structures for OGS [41]: inconsistency not present; indirectness not present; imprecision not present; risk of bias not present; strength of evidence ⨁⨁⨁⨁.
Application of AI for predicting blood loss prior to OGS [38]: inconsistency not present; indirectness not present; imprecision not present; risk of bias present; strength of evidence ⨁⨁⨁◯.
⨁⨁⨁⨁ High evidence; ⨁⨁⨁◯ Moderate evidence.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
