Article

Diagnosis of Glaucoma Based on Few-Shot Learning with Wide-Field Optical Coherence Tomography Angiography

1 Department of Artificial Intelligence, Hanyang University, Seoul 04763, Republic of Korea
2 Department of Ophthalmology, Hanyang University Seoul Hospital, Seoul 04763, Republic of Korea
3 Department of Electrical Engineering, Hanyang University, Seoul 04763, Republic of Korea
4 Department of Electrical and Computer Engineering, College of Liberal Studies, Seoul National University, Seoul 08826, Republic of Korea
5 Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Republic of Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 22 February 2024 / Revised: 15 March 2024 / Accepted: 16 March 2024 / Published: 27 March 2024
(This article belongs to the Special Issue Glaucoma: New Diagnostic and Therapeutic Approaches)

Abstract

This study evaluated the utility of incorporating deep learning into the relatively novel imaging technique of wide-field optical coherence tomography angiography (WF-OCTA) for glaucoma diagnosis. To overcome the challenge of limited data associated with this emerging imaging modality, the application of few-shot learning (FSL) was explored, and the advantages observed during its implementation were examined. A total of 195 eyes, comprising 82 normal controls and 113 patients with glaucoma, were examined in this study. The system was trained using FSL instead of traditional supervised learning, and model training proceeded in two distinct stages. Glaucoma feature detection was performed using ResNet18 as a feature extractor, and the ProtoNet algorithm was utilized to perform task-independent classification for FSL. Using this trained model, the performance of WF-OCTA with the FSL technique was evaluated: the model was trained with 10 normal and 10 glaucoma WF-OCTA images, and its glaucoma detection effectiveness was then examined. FSL using WF-OCTA images achieved an area under the receiver operating characteristic curve (AUC) of 0.93 (95% confidence interval (CI): 0.912–0.954) and an accuracy of 81%. In contrast, supervised learning using WF-OCTA images produced worse results, with an AUC of 0.80 (95% CI: 0.778–0.823) and an accuracy of 50% (p < 0.05). Furthermore, the FSL method using WF-OCTA images demonstrated improvement over the conventional OCT parameter-based results (all p < 0.05). This study demonstrated the effectiveness of applying deep learning to WF-OCTA for glaucoma diagnosis, highlighting the potential of WF-OCTA images in glaucoma diagnostics. Additionally, it showed that FSL can overcome the limitations associated with a small dataset and is expected to be applicable in various clinical settings.

1. Introduction

Glaucoma is a disease involving specific morphologic changes in the optic nerve and loss of the retinal nerve fiber layer (RNFL), resulting in functional changes in the visual field [1,2]. Disc photography, optical coherence tomography (OCT) [3,4,5,6,7,8,9], and OCT angiography (OCTA) are among the various imaging modalities used for diagnosing glaucoma. The diagnostic data are presented as images or numerical values, depending on the instrument used. Among these techniques, OCTA is a non-invasive imaging method that assesses the vasculature of the retina and optic nerve without the need for dye injection [10,11]. Changes in vessel density on OCTA align with functional and structural alterations detected through visual field exams and OCT scans, providing good consistency and effectively distinguishing between glaucomatous and normal eyes.
Wide-field OCTA (WF-OCTA), which overcomes the limited field of view in traditional OCTA, is emerging as one of the new diagnostic imaging approaches for retinal disease and glaucoma [12,13,14,15,16]. WF-OCTA’s scanning capabilities have been improved with technical advancements, such as swept-source OCT (SS-OCT), now allowing the examination of large areas of the posterior pole, encompassing both the optic nerve head and macula. Notably, when examining pathologic eyes with structural distortion of the optic disc, such as high myopia or retinal diseases, including epiretinal membrane and peripapillary retinoschisis, errors may occur in measuring conventional RNFL thickness maps. Additionally, WF-OCTA displays broader angiographic data in comparison to conventional imaging. This could potentially enhance the accuracy of glaucoma diagnosis, especially when other pathological alterations in the eyes complicate the process.
This study evaluates the accuracy of a deep-learning (DL) algorithm using WF-OCTA for identifying glaucoma. DL image classification is being assessed as a pre-diagnostic tool that precedes human diagnosis. Sufficient data are crucial for effectively training DL networks for image classification in medical imaging diagnosis; insufficient data can result in issues such as overfitting and underfitting. Collecting sufficient medical data for training is a challenge due to limited data availability and privacy concerns. Furthermore, because WF-OCTA, the technology utilized in this study, is still at an early stage of clinical adoption, obtaining adequate data is difficult.
In recent years, few-shot learning (FSL) has emerged as a promising approach in DL, particularly in scenarios where limited annotated data are available. Unlike traditional supervised learning methods, which rely on large, labeled datasets for training, FSL enables models to generalize to new tasks with only a small amount of annotated data, mimicking human learning processes with limited examples [17]. The relationship between dataset size and accuracy in machine learning, including FSL, is complex. While larger datasets typically offer more diverse examples for model training, several factors impact this relationship. High-quality, well-annotated data are crucial for training accurate models, and task complexity and model architecture also influence performance [18]. Imbalanced data distributions and the use of regularization techniques further shape the interplay between dataset size and accuracy [19]. In the context of FSL, dataset size plays a crucial role in model performance. Although FSL techniques can handle limited data scenarios, increasing the dataset size can significantly enhance performance, especially if the additional data includes rare cases or provides greater diversity [20]. It is essential to understand these dynamics to optimize model performance and effectively utilize available data resources.
In such situations, implementing the FSL [21,22,23] approach may be a way to overcome this challenge. The FSL methodology permits machine learning from a small number of samples, usually fewer than 10. Therefore, this study assessed the diagnostic potential of WF-OCTA for detecting glaucoma using an FSL approach to overcome data scarcity.

2. Materials and Methods

This study’s protocol was approved by the Institutional Review Board (IRB) of Hanyang University Hospital, Seoul, Republic of Korea (IRB number: HYUH 2021-07-036). This study was designed in accordance with the tenets of the Declaration of Helsinki for biomedical research. The need for participant consent for retrospective data assessment was waived by the ethics committee.

2.1. Study Design and Participants

In this retrospective, comparative study, a total of 195 eyes were examined at Hanyang University Seoul Hospital Glaucoma Clinic between December 2021 and December 2022. Of these, 82 eyes were affected by glaucoma, and 113 were controls without glaucoma. All participants underwent WF-OCTA imaging with the same SS-OCT device (DRI OCT Triton, Topcon), and glaucoma was diagnosed by a glaucoma specialist. Diagnosis of glaucoma and selection of the control group were performed similarly to previous studies (Supplementary Materials) [24,25]. To eliminate ambiguity, this study excluded patients with high myopia (spherical error < −6.0 D), retinal diseases, and glaucoma suspect status without definite visual field impairment or RNFL defects.

2.2. WF-OCTA

The wide-field 12 × 12 mm OCTA scan generates an en-face image of retinal vessels through various segmented layers. The SS-WF-OCTA scans volumes centered on the retina within a 12 × 12 mm field of view at a scan rate of 100,000 A-scans per second, offering a lateral resolution of 20 μm. The device’s built-in software corrects for the actual refraction to prevent refraction-related image degradation. The report of the 12 × 12 mm WF-OCTA scan overlays the RNFL or ganglion cell–inner plexiform layer (GCIPL)/ganglion cell complex (GCC) thickness map on the WF-OCTA image.
Figure 1 displays (A) the OCT RNFL thickness map used in pre-training and the three types of WF-OCTA images used in FSL: (B) a combination of WF-OCTA and the RNFL thickness map (Combi 1), (C) a combination of WF-OCTA and the GCC thickness map (Combi 2), and (D) WF-OCTA alone (black and white). Apart from WF-OCTA, this work evaluated the vessel density using optic disc OCTA (4.5 × 4.5 mm). The vessel density of the superficial capillary plexus (SCP) was assessed in four sectors (superior, nasal, inferior, and temporal) to determine whether the vessel density had decreased.

2.3. DL Techniques: Image Classification on Medical Diagnosis

Medical image classification is pivotal in clinical treatment and early diagnosis. However, traditional methods have demonstrated limited performance and often require significant time and effort to identify and select features for classification. DL methods have surpassed the performance of some existing models and streamlined the design process through a data-driven approach. In particular, the supervised learning (SL) method using DL networks has achieved great success in various image classification tasks [26,27,28]. As the parameters of DL networks for image classification tend to be numerous, a large amount of training data is required to train such networks. However, when it comes to medical image classification, the preparation of extensive training datasets demands costly and time-consuming manual annotation by medical professionals. Moreover, the distribution of medical data can be significantly imbalanced: acquiring a large amount of normal data may be easy, whereas obtaining disease samples is challenging due to the rarity of certain disease cases. To address this issue, a training approach capable of accurately diagnosing diseases with a very limited amount of data was required. FSL aims to make predictions in situations where only a few examples are available for each class [29]. This study presents an effective glaucoma detection method leveraging the FSL method.
Data used in FSL can be broadly classified into three categories: the training set, the support set, and the query set [30] (Figure 2). Additionally, a ‘class’ in FSL refers to the group of objects that the model is trying to learn and distinguish from other classes. FSL initially learns how to distinguish between classes through the training set, which contains a large amount of data. Then, through the support set, FSL learns from only a small amount of data for each class that is not present in the training set. When a query sample is input, the system determines which class in the support set it most closely resembles.
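The following minimal sketch illustrates how a labeled image pool could be organized into a small support set and a query set; it is not the authors’ code, and the pool format, class labels, and shot counts are assumptions made only for illustration.

```python
import random
from collections import defaultdict

# Illustrative sketch (not the study's implementation): splitting a labeled pool of
# images into a small support set and a query set for a new task. `pool` is assumed
# to be a list of (image, label) pairs; shot and query counts are hypothetical.
def sample_episode(pool, n_shot=10, n_query=50):
    by_class = defaultdict(list)
    for image, label in pool:
        by_class[label].append(image)

    support, query = [], []
    for label, images in by_class.items():
        random.shuffle(images)
        # Support set: a few labeled examples per class used to adapt to the new task.
        support += [(img, label) for img in images[:n_shot]]
        # Query set: unseen examples used to evaluate the adapted model.
        query += [(img, label) for img in images[n_shot:n_shot + n_query]]
    return support, query
```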
ProtoNet [31] is an FSL method that represents each class or concept with a prototype, a central reference point in the feature space. During training, the model learns to create a prototype from a few labeled examples, using DL networks such as convolutional neural networks to extract meaningful features from the input data. The prototype is computed as the average feature vector of the support examples belonging to each class, and during the inference step, the model compares the query example to the prototypes and assigns it to the class with the closest prototype. ProtoNet utilizes a metric learning objective function, enabling effective learning with limited data. This capability allows the model to differentiate between various classes and generalize to new instances.
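A minimal PyTorch sketch of this decision rule is shown below. It assumes an `embed` function (for example, a ResNet18 backbone) that maps images to feature vectors, and is intended only to illustrate the prototype computation and nearest-prototype assignment, not to reproduce the authors’ implementation.

```python
import torch

# Sketch of the ProtoNet decision rule [31]; `embed` maps a batch of images of shape
# (n, C, H, W) to feature vectors of shape (n, dim) and is an assumed component.
def protonet_classify(embed, support_images, support_labels, query_images):
    with torch.no_grad():
        support_feats = embed(support_images)            # (n_support, dim)
        query_feats = embed(query_images)                # (n_query, dim)

    classes = torch.unique(support_labels)
    # Prototype = mean feature vector of each class's support examples.
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in classes]
    )                                                    # (n_classes, dim)

    # Squared Euclidean distances between every query and every prototype; a query
    # is assigned to the class of its nearest prototype, and a softmax over negative
    # distances yields class probabilities.
    dists = torch.cdist(query_feats, prototypes) ** 2    # (n_query, n_classes)
    probs = torch.softmax(-dists, dim=1)
    return classes[probs.argmax(dim=1)], probs
```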

2.4. DL Techniques: Proposed Method

This study proposes a ProtoNet-based network for making predictions from WF-OCTA images through DL. The ProtoNet-based network comprises a feature-extracting backbone network and a clustering module that performs the true/false (glaucoma/normal) classification. The appropriate backbone network varies with the size and type of the data; in this work, ResNet18 was deployed as the backbone network [27].
ResNet18 was pre-trained on a very large dataset, ImageNet [32] (1.2 million images in 1000 categories) [33]. Such a pre-trained model can be utilized as an initialization or as a fixed feature extractor through the transfer learning method [34]. Transfer learning is a DL method in which a model trained for one task is repurposed for a related second task. We obtained the backbone weights by fine-tuning the ImageNet-pre-trained ResNet18 on SS-OCT images [24,25] (Figure S1). For pre-training, the SS-OCT RNFL thickness maps (12 × 9 mm) of the glaucoma and normal groups were used (Figure 1). Data from the patient group in our existing study (Journal of Glaucoma) were used [25], comprising a total of 417 eyes with glaucoma and 258 normal eyes.
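The sketch below outlines this transfer-learning step under stated assumptions: an ImageNet-pre-trained ResNet18 whose 1000-way head is replaced with a binary glaucoma/normal head and fine-tuned on SS-OCT thickness maps. The data loader, optimizer settings, and epoch count are illustrative rather than the authors’ actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and replace the 1000-way classification head with a
# 2-way (glaucoma vs. normal) head for fine-tuning on SS-OCT RNFL thickness maps.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)  # illustrative settings
criterion = nn.CrossEntropyLoss()

def fine_tune(train_loader, epochs=10):
    backbone.train()
    for _ in range(epochs):
        for images, labels in train_loader:   # batches of thickness-map images
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)
            loss.backward()
            optimizer.step()

# After fine-tuning, the classification head is dropped so that the remaining layers
# act as the feature extractor supplied to the ProtoNet clustering module.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
```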
We now turn to the training and testing of ProtoNet’s clustering module. To train the clustering module, Mini-ImageNet [35], a modified version of ImageNet for FSL, was utilized. Training directly on WF-OCTA introduces a potential risk of overfitting, as the model may become too specialized to WF-OCTA data. Because each patient demonstrates different glaucoma manifestations in the data, overfitting to the training data increases the risk of incorrect predictions for new patients. To mitigate this, we trained ProtoNet on Mini-ImageNet to establish a general, task-independent understanding of similarity and dissimilarity.
Finally, the network’s performance on WF-OCTA data was evaluated in the test, or inference, phase. We trained the neural network on WF-OCTA data from 10 patients with glaucoma and 10 normal individuals, forming a support set. Few-shot training, in this case, refers to the adaptation of a pre-existing network to a new task (here, glaucoma detection) with minimal input. Subsequently, we presented 100 unseen data points, referred to as the query set, and evaluated whether each data point represented glaucoma. The schematic diagram is presented in Figure 3.

2.5. Statistical Analysis

This study compared the diagnostic performance of FSL, conducted with pre-training on different image sets and Mini-ImageNet, against conventional SL using the same limited number of WF-OCTA images. Furthermore, the comparison was extended to evaluate the diagnostic capability of the numerical values (parameters) commonly used in traditional OCT and OCTA.
To assess the diagnostic capability for detecting the presence or absence of glaucoma, we computed the area under the receiver operating characteristic curve (AUC) and accuracy. The AUC with a 95% confidence interval (95% CI) was computed while varying the cutoff value for the probability of glaucoma. The method described by DeLong et al. [36] was utilized to compare AUC values among different parameters. Accuracy served as a metric for the precision of the glaucoma classification and was estimated as the proportion of correctly classified data in the entire dataset used for testing. p-values < 0.05 were considered statistically significant. Values are presented as mean ± standard deviation. Statistical tests were conducted using SPSS version 24 (IBM Inc., Armonk, NY, USA), MedCalc version 19.1.3 (MedCalc Software, Ostend, Belgium), and PyTorch version 1.12.0 in Python (Facebook AI Research Lab, Menlo Park, CA, USA) [37].
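As a rough illustration of the AUC computation (the study itself used the DeLong method in MedCalc for confidence intervals and AUC comparisons), the following sketch estimates an AUC and an approximate 95% CI from predicted glaucoma probabilities using a simple bootstrap; it is a stand-in under stated assumptions, not the analysis actually performed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Bootstrap approximation of a 95% CI for the AUC; the study used the DeLong method
# instead, so this is only an illustrative substitute.
def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                                     # AUC needs both classes
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return auc, (lo, hi)
```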

3. Results

Demographics and ocular characteristics of the support set and query set are summarized in Table S1. The median age was 58.4 ± 15.8 years. No statistically significant differences were observed in spherical equivalent, axial length, or intraocular pressure, regardless of the presence of glaucoma. Both the glaucoma and control groups were evenly composed, with similar values for other ocular characteristics such as MD (dB), VFI (%), and RNFL, GCIPL, and GCC thickness (μm).
In the experiment, 20 WF-OCTA images, consisting of 10 glaucoma and 10 normal images, were used as the support set. Additionally, 100 WF-OCTA images (50 glaucoma and 50 normal) were classified in FSL experiments with 1, 2, 5, and 10 shots. “Shot” refers to the number of data points used to adapt training to a new task; for example, in the one-shot setting, the model sees only one glaucoma image and one normal image before classifying the 100 query images. Table S2 presents the accuracy and AUC values for the WF-OCTA images, and the results clearly show that accuracy increases with the number of shots. The subsequent comparisons were based on 10 shots.
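The sketch below shows how such a shot-count experiment could be run: for each value of k, only k support images per class are embedded to form prototypes before the same query set is classified. The `embed`, `support`, and `query` objects follow the conventions of the earlier sketches and are assumptions rather than the authors’ code.

```python
import random
import torch

# Illustrative k-shot evaluation loop; not the study's implementation.
def accuracy_vs_shots(embed, support, query, shot_counts=(1, 2, 5, 10)):
    by_class = {}
    for image, label in support:
        by_class.setdefault(label, []).append(image)

    results = {}
    for k in shot_counts:
        with torch.no_grad():
            # k-shot prototypes: mean embedding of k randomly chosen images per class
            # (e.g., one glaucoma and one normal image in the 1-shot setting).
            protos = {lbl: embed(torch.stack(random.sample(imgs, k))).mean(dim=0)
                      for lbl, imgs in by_class.items()}
            correct = 0
            for image, label in query:
                feat = embed(image.unsqueeze(0)).squeeze(0)
                pred = min(protos, key=lambda c: torch.norm(feat - protos[c]))
                correct += int(pred == label)
        results[k] = correct / len(query)
    return results
```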
We were also interested in whether SL could classify WF-OCTA images with such a limited amount of data. To investigate this point, we conducted a performance verification and comparison between the existing SL method using ResNet18 and the proposed method on the WF-OCTA data. For SL, we trained with a total of 20 WF-OCTA images and tested with 100 WF-OCTA images. Figure 4 and Table 1 demonstrate that SL fails to learn, remaining at an accuracy of 50%. In contrast, the accuracy and AUC values are higher for FSL, and the differences are statistically significant (p < 0.05) for all results.
This study also compared our method with existing methods based on peripapillary RNFL, macular GCIPL, and macular GCC thickness values, which are widely used in the glaucoma field.
As displayed in Table 2, the AUC values demonstrate that the performance of the WF-OCTA Combis adopting FSL is significantly higher than that of the thickness values, with all respective p-values at or below 0.05.

4. Discussion

In this study, the effectiveness of WF-OCTA as an image diagnostic modality for glaucoma diagnosis is investigated using DL algorithms, and an FSL method is also introduced to overcome the difficulties of data collection for WF-OCTA. Although many previous studies have applied artificial intelligence (AI) to images such as OCT and SS-OCT to diagnose glaucoma, no studies have been conducted on WF-OCTA images. Therefore, the application of DL to WF-OCTA images is the first of its kind.
In the field of ophthalmology, various attempts have been made to use few-shot learning as an automatic diagnostic evaluation method for images. Quellec et al. applied FSL to the detection of rare conditions such as papilledema and anterior ischemic optic neuropathy in the OPHDIAT diabetic retinopathy screening program [38]. Kim et al. introduced a novel approach for developing an effective computational model for early diagnosis of glaucoma, relying solely on a single type of image (high-resolution fundus images) and using FSL with a matching network, which predates ProtoNet [39]. Han et al. used FSL for ophthalmic disease screening, focusing primarily on enhancing data by fusion or aggregation of various data types rather than relying on a small amount of data [40].
First, when comparing FSL and SL, FSL demonstrated better diagnostic performance. The poor performance of SL is due to underfitting, which occurs because of the small amount of image data available (as with WF-OCTA). This means that, to examine the effectiveness of AI in this situation, FSL, rather than the existing SL methodology, must be used as an alternative methodology that can improve performance. This comparison does not serve to establish the general superiority of FSL over SL; instead, it underscores the importance of FSL as a fitting methodology in specific circumstances. FSL was utilized to construct models by incorporating various data sources, including open benchmark data, to verify its effectiveness for WF-OCTA data.
This study conducted experiments on the FSL method utilizing an algorithm called ProtoNet, which transforms data into prototypes and clusters them, making it suitable for representing class-specific distributions. To adapt ProtoNet as the FSL algorithm for this study, we identified two stages. The first involves extracting features, which capture the critical image elements to be evaluated; the second is classification, which determines whether results are positive or negative. The feature extractor was trained using RNFL images and the transfer learning method [41], which utilized an existing ophthalmic medical image dataset, resulting in an effective feature extractor for ophthalmic medical images. To reduce potential task dependence, the model was trained on Mini-ImageNet during the classification stage. Additionally, the use of distinct datasets for training the two parts of the network aimed to maximize its benefits.
The FSL method employed in this study has the potential to significantly impact real-world medical applications; we attribute this to the improvements in FSL’s capabilities achieved through the aforementioned efforts. FSL can be useful when not enough images are available for training, whether due to the rarity of a disease or the recent release of a new image type. As new imaging modalities continue to be introduced with the development of technology, incorporating FSL will help researchers evaluate their diagnostic power at the time such images are introduced.
This study also compared the FSL of WF-OCTA with existing methods based on peripapillary RNFL, macular GCIPL, and macular GCC thickness values, which are widely used parameters in glaucoma. Application of WF-OCTA data to FSL, especially the WF-OCTA Combi data, demonstrated that this method surpasses conventional numeric parameter-based diagnosis. Nevertheless, WF-OCTA alone (grayscale, without a Combi map) exhibits decreased performance compared with the other two image types. This discrepancy arises because the RNFL images used for feature training, as well as the two Combi images, are in RGB format, whereas standalone WF-OCTA is a grayscale image; the channel mismatch degrades feature extraction performance. To enhance the performance of grayscale WF-OCTA, one could acquire enough grayscale glaucoma images to train the feature extractor or use additional shots to increase the accuracy of the network. We expect that increasing the number of shots beyond 10 may further enhance the algorithm’s performance in future research.
WF-OCTA offers several distinct advantages in detecting RNFL defects for glaucoma diagnosis [42]. First, WF-OCTA visualizes a wider area (12 × 12 mm) compared to the existing OCT RNFL thickness map (12 × 9 mm). This may also be useful in assessing peripheral regions and detecting abnormalities that can be missed in conventional imaging. Second, visualization of blood flow dynamics improves the accuracy of glaucoma diagnosis in cases with other ocular pathologies where the existing thickness map is compromised by retinal diseases such as ERM or peripapillary retinoschisis. Particularly in cases of high myopia, where RNFL defects may not be clearly observed in the red-free fundus photo, WF-OCTA can be helpful [15,43]. By providing angiographic information across a broader field than conventional imaging, WF-OCTA has the potential to enhance glaucoma diagnosis.
However, the current active clinical utilization of WF-OCTA can be limited by some disadvantages. WF-OCTA takes a long time to acquire, requires patient cooperation, and imaging can be difficult for older patients, especially those with tremors. Moreover, deviation maps cannot be created, and the current embedded software only offers combination maps with the RNFL, GCC, and GCIPL thickness maps because a normative database of OCTA values has not yet been established. These factors may make the active clinical utilization of WF-OCTA challenging.
Nevertheless, when diagnosing glaucoma becomes challenging due to other pathological changes in the eye, such as high myopia or retinal diseases, WF-OCTA can serve as a valuable adjunctive imaging technique. This study’s findings demonstrate that WF-OCTA could offer a valuable alternative to the current RNFL method for diagnosing glaucoma, especially in patients with co-existing ocular conditions. Therefore, we believe that it can be readily applied in clinical settings, particularly in cooperative patients who can endure relatively long examination times. With the assistance of FSL, the accuracy of WF-OCTA for glaucoma diagnosis could further improve, leading to significant advancements in the diagnosis and treatment of glaucoma. Moreover, FSL could be clinically applicable not only in glaucoma management but also in relatively less prevalent conditions like inherited retinal diseases or neuro-ophthalmic diseases.
This study has certain limitations. First, the advantages of FSL could have been demonstrated more accurately if we had used a wide fundus photo or OCT RNFL thickness map instead of WF-OCTA. However, this study tried to demonstrate the advantages of both FSL (a new DL algorithm) and WF-OCTA (a newly introduced imaging method). Second, 100 OCT RNFL thickness map images were used for feature extraction, which is still too small for pre-training. Third, it would have been more intriguing if we had compared the accuracy of diagnosis by a physician with that of FSL. Fourth, OCT RNFL thickness maps were used for the training set, and these are very similar to the images used in the support set (WF-OCTA). In future studies, we plan to assess the accuracy when using images with lower similarity as the training set.

5. Conclusions

This study demonstrated the effectiveness of applying DL to WF-OCTA for glaucoma diagnosis, highlighting the potential of WF-OCTA images in glaucoma diagnostics. Additionally, the application of FSL was shown to overcome the limitation of small dataset size. Utilizing FSL with imaging techniques characterized by limited data can be effective, and its applicability in various clinical settings is anticipated.

Supplementary Materials

The following supporting information can be downloaded at https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/biomedicines12040741/s1. Supplementary Material: Diagnosis of glaucoma and selection of the control group. Figure S1: Training ResNet18 weights using SS-OCT data. Table S1: Demographic and clinical characteristics of eyes and patients in the training vs. test samples (95 images are listed for training overall, but 20 were randomly selected and used in the actual experiment). Table S2: Comparison of shot number, accuracy, and area under the receiver operating characteristic curve as a function of the number of shots.

Author Contributions

Conceptualization, J.W.C. and W.J.L.; methodology, W.J.L.; software, J.W.C.; validation, Y.S., K.O.Y., J.M.L. and I.Y.Y.; formal analysis, Y.S., K.O.Y. and W.J.L.; investigation, Y.S., K.O.Y. and W.J.L.; resources, W.J.L.; data curation, Y.S., K.O.Y. and W.J.L.; writing—original draft preparation, Y.S., K.O.Y. and J.M.L.; writing—review and editing, W.J.L.; visualization, Y.S., K.O.Y. and J.M.L.; supervision, J.W.C. and W.J.L.; project administration, W.J.L.; funding acquisition, W.J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Bio and Medical Technology Development Program of the National Research Foundation (NRF), funded by the Korean Government (MSIT) (No. NRF-2022R1A2C1092176).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Hanyang University Hospital (IRB number: HYUH 2021-07-036, date of approval: 2021.07.27).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

Data are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Weinreb, R.N.; Aung, T.; Medeiros, F.A. The pathophysiology and treatment of glaucoma: A review. JAMA 2014, 311, 1901–1911. [Google Scholar] [CrossRef] [PubMed]
  2. Kwon, Y.H.; Fingert, J.H.; Kuehn, M.H.; Alward, W.L. Primary open-angle glaucoma. N. Engl. J. Med. 2009, 360, 1113–1124. [Google Scholar] [CrossRef] [PubMed]
  3. Bussel, I.I.; Wollstein, G.; Schuman, J.S. OCT for glaucoma diagnosis, screening and detection of glaucoma progression. Br. J. Ophthalmol. 2014, 98 (Suppl. S2), ii15–ii19. [Google Scholar] [CrossRef] [PubMed]
  4. Grewal, D.S.; Tanna, A.P. Diagnosis of glaucoma and detection of glaucoma progression using spectral domain optical coherence tomography. Curr. Opin. Ophthalmol. 2013, 24, 150–161. [Google Scholar] [CrossRef]
  5. Vessani, R.M.; Moritz, R.; Batis, L.; Zagui, R.B.; Bernardoni, S.; Susanna, R. Comparison of quantitative imaging devices and subjective optic nerve head assessment by general ophthalmologists to differentiate normal from glaucomatous eyes. J. Glaucoma 2009, 18, 253–261. [Google Scholar] [CrossRef]
  6. Sung, K.R.; Kim, J.S.; Wollstein, G.; Folio, L.; Kook, M.S.; Schuman, J.S. Imaging of the retinal nerve fibre layer with spectral domain optical coherence tomography for glaucoma diagnosis. Br. J. Ophthalmol. 2011, 95, 909–914. [Google Scholar] [CrossRef]
  7. Mwanza, J.-C.; Warren, J.L.; Budenz, D.L. Combining spectral domain optical coherence tomography structural parameters for the diagnosis of glaucoma with early visual field loss. Investig. Ophthalmol. Vis. Sci. 2013, 54, 8393–8400. [Google Scholar] [CrossRef]
  8. Lisboa, R.; Mansouri, K.; Zangwill, L.M.; Weinreb, R.N.; Medeiros, F.A. Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography. Am. J. Ophthalmol. 2013, 156, 918–926.e2. [Google Scholar] [CrossRef]
  9. Greaney, M.J.; Hoffman, D.C.; Garway-Heath, D.F.; Nakla, M.; Coleman, A.L.; Caprioli, J. Comparison of optic nerve imaging methods to distinguish normal eyes from those with glaucoma. Investig. Ophthalmol. Vis. Sci. 2002, 43, 140–145. [Google Scholar]
  10. Rao, H.L.; Pradhan, Z.S.; Suh, M.H.; Moghimi, S.; Mansouri, K.; Weinreb, R.N. Optical Coherence Tomography Angiography in Glaucoma. J. Glaucoma 2020, 29, 312–321. [Google Scholar] [CrossRef] [PubMed]
  11. Werner, A.C.; Shen, L.Q. A Review of OCT Angiography in Glaucoma. Semin. Ophthalmol. 2019, 34, 279–286. [Google Scholar] [CrossRef] [PubMed]
  12. Grewal, D.S.; Agarwal, M.; Munk, M.R. Wide Field Optical Coherence Tomography and Optical Coherence Tomography Angiography in Uveitis. Ocul. Immunol. Inflamm. 2022, 32, 105–115. [Google Scholar] [CrossRef] [PubMed]
  13. Hamada, M.; Hirai, K.; Wakabayashi, T.; Ishida, Y.; Fukushima, M.; Kamei, M.; Tsuboi, K. Real-world utility of wide-field OCT angiography to detect retinal neovascularization in eyes with proliferative diabetic retinopathy. Ophthalmol. Retina, 2023; in press. [Google Scholar] [CrossRef]
  14. Hirano, T.; Hoshiyama, K.; Takahashi, Y.; Murata, T. Wide-field swept-source OCT angiography (23 × 20 mm) for detecting retinal neovascularization in eyes with proliferative diabetic retinopathy. Graefes Arch. Clin. Exp. Ophthalmol. 2023, 261, 339–344. [Google Scholar] [CrossRef] [PubMed]
  15. Kim, Y.J.; Na, K.I.; Lim, H.W.; Seong, M.; Lee, W.J. Combined wide-field optical coherence tomography angiography density map for high myopic glaucoma detection. Sci. Rep. 2021, 11, 22034. [Google Scholar] [CrossRef] [PubMed]
  16. Munsell, M.K.; Garg, I.; Duich, M.; Zeng, R.; Baldwin, G.; Wescott, H.E.; Koch, T.; Wang, K.L.; Patel, N.A.; Miller, J.B. A normative database of wide-field swept-source optical coherence tomography angiography quantitative metrics in a large cohort of healthy adults. Graefes Arch. Clin. Exp. Ophthalmol. 2023, 261, 1835–1859. [Google Scholar] [CrossRef] [PubMed]
  17. Lake, B.M.; Salakhutdinov, R.; Tenenbaum, J.B. Human-level concept learning through probabilistic program induction. Science 2015, 350, 1332–1338. [Google Scholar] [CrossRef] [PubMed]
  18. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Volume 4, p. 738. [Google Scholar]
  19. He, H.; Garcia, E.A. Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1263–1284. [Google Scholar]
  20. Chen, W.Y.; Liu, Y.C.; Kira, Z.; Wang, Y.C.F.; Huang, J.B. A closer look at few-shot classification. arXiv 2019, arXiv:1904.04232. [Google Scholar]
  21. Miller, E.G.; Matsakis, N.E.; Viola, P.A. Learning from one example through shared densities on transforms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000 (Cat. No. PR00662), Hilton Head, SC, USA, 13–15 June 2000; pp. 464–471. [Google Scholar]
  22. Lake, B.; Salakhutdinov, R.; Gross, J.; Tenenbaum, J. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, Boston, MA, USA, 20–23 July 2011. [Google Scholar]
  23. Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese Neural Networks for One-Shot Image Recognition; ICML Deep Learning Workshop: Lille, France, 2015. [Google Scholar]
  24. Shin, Y.; Cho, H.; Shin, Y.U.; Seong, M.; Choi, J.W.; Lee, W.J. Comparison between deep-learning-based ultra-wide-field fundus imaging and true-colour confocal scanning for diagnosing glaucoma. J. Clin. Med. 2022, 11, 3168. [Google Scholar] [CrossRef]
  25. Shin, Y.; Cho, H.; Jeong, H.C.; Seong, M.; Choi, J.W.; Lee, W.J. Deep Learning-based Diagnosis of Glaucoma Using Wide-field Optical Coherence Tomography Images. J. Glaucoma 2021, 30, 803–812. [Google Scholar] [CrossRef]
  26. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  28. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  29. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. 2020, 53, 1–34. [Google Scholar] [CrossRef]
  30. Song, Y.; Wang, T.; Cai, P.; Mondal, S.K.; Sahoo, J.P. A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities. ACM Comput. Surv. 2023, 55, 1–40. [Google Scholar] [CrossRef]
  31. Snell, J.; Swersky, K.; Zemel, R. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  32. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  33. Han, X.; Zhang, Z.; Ding, N.; Gu, Y.; Liu, X.; Huo, Y.; Qiu, J.; Yao, Y.; Zhang, A.; Zhang, L.; et al. Pre-trained models: Past, present and future. AI Open 2021, 2, 225–250. [Google Scholar] [CrossRef]
  34. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  35. Vinyals, O.; Blundell, C.; Lillicrap, T.; Wierstra, D. Matching networks for one shot learning. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar]
  36. DeLong, E.R.; DeLong, D.M.; Clarke-Pearson, D.L. Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics 1988, 44, 837–845. [Google Scholar] [CrossRef]
  37. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in Pytorch. 2017. Available online: https://openreview.net/pdf/25b8eee6c373d48b84e5e9c6e10e7cbbbce4ac73.pdf?ref=blog.premai.io (accessed on 29 October 2017).
  38. Quellec, G.; Lamard, M.; Conze, P.-H.; Massin, P.; Cochener, B. Automatic detection of rare pathologies in fundus photographs using few-shot learning. Med. Image Anal. 2020, 61, 101660. [Google Scholar] [CrossRef]
  39. Kim, M.; Zuallaert, J.; De Neve, W. Few-shot learning using a small-sized dataset of high-resolution FUNDUS images for glaucoma diagnosis. In Proceedings of the 2nd International Workshop on Multimedia for Personal Health and Health Care, Mountain View, CA, USA, 23 October 2017; pp. 89–92. [Google Scholar]
  40. Han, Z.K.; Xing, H.; Yang, B.; Hong, C.Y. A few-shot learning-based eye diseases screening method. Eur. Rev. Med. Pharmacol. Sci. 2022, 26, 8660–8674. [Google Scholar] [CrossRef]
  41. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
  42. Hong, R.K.; Kim, J.H.; Toh, G.; Na, K.I.; Seong, M.; Lee, W.J. Diagnostic performance of wide-field optical coherence tomography angiography for high myopic glaucoma. Sci. Rep. 2024, 14, 367. [Google Scholar] [CrossRef]
  43. Kim, H.; Park, H.M.; Jeong, H.C.; Moon, S.Y.; Cho, H.; Lim, H.W.; Seong, M.; Park, J.; Lee, W.J. Wide-field optical coherence tomography deviation map for early glaucoma detection. Br. J. Ophthalmol. 2023, 107, 49–55. [Google Scholar] [CrossRef]
Figure 1. Dataset examples for pre-training and FSL: (A) SS-OCT RNFL thickness map (12 × 9 mm) used in pretraining, (B) WF-OCTA-RNFL Combi, (C) WF-OCTA-GCC Combi, and (D) WF-OCTA used for FSL.
Figure 2. Categorization of datasets in FSL. The training set is used to train a deep-learning network to extract task-specific features from images. The support set is a small set of data points (e.g., 1, 2, 5, 10, etc.) used for training an FSL model for a new task. The query sample is data to evaluate the effectiveness of a network trained on a limited dataset.
Figure 3. Implementation of FSL for WF-OCTA integration. This study employed FSL to integrate WF-OCTA into an AI algorithm. ResNet18 served as the backbone network, trained on ophthalmic images to capture relevant features. The ProtoNet algorithm facilitated feature clustering and classification. Training utilized the Mini-ImageNet benchmark dataset. The WF-OCTA data were split into support and query sets for validation.
Figure 4. Receiver operating characteristic curves for glaucoma diagnostic methods. Left: FSL vs. SL with WF-OCTA images; FSL shows a higher AUC. Right: classification of glaucoma vs. normal cases; FSL WF-OCTA RNFL Combi and FSL WF-OCTA GCC Combi achieve the highest AUCs (0.930 and 0.881), outperforming conventional thickness values (RNFL, GCC, and GCIPL thickness AUCs: 0.870, 0.863, and 0.782).
Table 1. Comparison of accuracy and area under the receiver operating characteristic curve between few-shot learning and supervised learning.
| WF-OCTA | FSL: RNFL Combi | FSL: GCC Combi | FSL: Alone | SL: RNFL Combi | SL: GCC Combi | SL: Alone | FSL vs. SL p-value: RNFL Combi | FSL vs. SL p-value: GCC Combi | FSL vs. SL p-value: Alone |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy (%) | 81 | 80 | 68 | 50 | 50 | 50 | <0.05 | <0.05 | <0.05 |
| AUC | 0.930 | 0.881 | 0.701 | 0.802 | 0.799 | 0.640 | <0.05 | <0.05 | <0.05 |
FSL = few-shot learning; SL = supervised learning; WF-OCTA = wide-field optical coherence tomography angiography; AUC = area under the receiver operating characteristic curve; RNFL = retinal nerve fiber layer; GCC = ganglion cell complex.
Table 2. Comparison of accuracy and area under the receiver operating characteristic curve between few-shot learning and conventional thickness value (p-values are expressed in the table below).
|  | FSL: WF-OCTA RNFL Combi | FSL: WF-OCTA GCC Combi | FSL: WF-OCTA | Thickness: RNFL | Thickness: GCC | Thickness: GCIPL |
|---|---|---|---|---|---|---|
| AUC | 0.930 | 0.881 | 0.701 | 0.870 | 0.863 | 0.782 |
| FSL: WF-OCTA RNFL Combi | NA | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 |
| FSL: WF-OCTA GCC Combi | <0.05 | NA | <0.05 | <0.05 | <0.05 | <0.05 |
| FSL: WF-OCTA | <0.05 | <0.05 | NA | <0.05 | <0.05 | <0.05 |
| Thickness: RNFL | <0.05 | <0.05 | <0.05 | NA | 0.83 | 0.49 |
| Thickness: GCC | <0.05 | <0.05 | <0.05 | 0.83 | NA | 0.37 |
| Thickness: GCIPL | <0.05 | <0.05 | <0.05 | 0.49 | 0.37 | NA |
FSL = few-shot learning; AUC = area under the receiver operating characteristic curve; WF-OCTA = wide-field optical coherence tomography angiography; RNFL = retinal nerve fiber layer; GCC = ganglion cell complex; GCIPL = ganglion cell–inner plexiform layer.

