Article

Advanced Analysis of Electroretinograms Based on Wavelet Scalogram Processing

1 Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, 620002 Yekaterinburg, Russia
2 Machine Learning and Data Analytics Lab, Department of Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
* Author to whom correspondence should be addressed.
Submission received: 7 October 2022 / Revised: 25 November 2022 / Accepted: 30 November 2022 / Published: 2 December 2022
(This article belongs to the Special Issue Advanced Medical Signal Processing and Visualization)

Featured Application

The results of this work will be used to develop a decision support system for ophthalmologists.

Abstract

Electroretinography (ERG) is a diagnostic test that measures the electrical activity of the retina in response to a light stimulus. Current ERG signal analysis relies on four components, namely the amplitudes and latencies of the a-wave and b-wave. The international electrophysiology community established the standard for electroretinography in 2008; however, in terms of signal analysis, there have been no major changes since then, and ERG analysis is still based on a four-component evaluation. This article describes an ERG database and the classification of its signals via advanced analysis of electroretinograms based on wavelet scalogram processing. To implement this extended ERG analysis, parameters were extracted from the wavelet scalogram of the signal using digital image processing and machine learning methods. Specifically, the study focused on the preprocessing of wavelet scalograms as images, and on the extraction and evaluation of their connected components. A decision tree was selected as the machine learning method, as it incorporates feature selection. The study results show that the proposed algorithm improves the classification accuracy of adult electroretinogram signals by 19%, and of pediatric signals by 20%, in comparison with the classical ERG features. A promising use of ERG lies in differential diagnostics, and it may also be applied in preclinical toxicology and experimental modeling. The difficulty of developing methods for electrophysiological signal analysis in ophthalmology is associated with the complex morphological structure of the electrophysiological signal components.

1. Introduction

Biomedical research is a multidisciplinary area spanning medicine, biology, informatics, and engineering. The results of research in this area may support the development of new methods for the diagnosis and treatment of various diseases [1]. The present study describes the use of biomedical ophthalmic signals for the diagnosis of retinal diseases.
The non-invasive ocular test called electroretinography (ERG) evaluates retinal function by measuring the electrical responses of retinal cells generated by a light stimulus. The ERG comprises several responses, originating from the photoreceptors in the outer retina, the inner retinal layers, and the final output neurons [2]. The ERG is mostly used for the diagnostic assessment of toxic retinopathies, diabetic retinopathies, hereditary diseases, etc.
The phenomenon of an electrical potential arising in the eye of a living organism under the action of a light stimulus was discovered by the Swedish physician Holmgren in 1865. This was the background for the scientific discovery of the ERG. Further studies by Dewar, Einthoven, Waller, and Granit identified ERG signal components conditioned by physiological responses [3,4,5,6]. The modern concept of ERG analysis involves the measurement of the a- and b-waves. The a-wave is characterized as the first large negative component, associated with the photoreceptor response in the outer retina. The b-wave is a positive component associated with the response of the inner retinal layers [7,8,9,10,11].
The ERG technique has great potential for early disease detection, diagnosis, and intervention in the field of ophthalmology. Another popular technique is the optical coherence tomography (OCT) test. The ERG can be a useful addition for child patients and for those who are cognitively or physically disabled or otherwise unable to cooperate with other examinations. In addition, it can reflect retinal lesions several years before visual symptoms or structural damage are detected by OCT. In recent years, ocular toxicity assessment has gained popularity in evaluating drug-therapy-related visual impairment, where the ERG technique has shown good prospects for retinal toxicity testing [12,13]. Manual ERG analysis is, however, highly dependent on the experience of the clinician, and a misdiagnosis might mean that the patient misses the optimal time for treatment. An automated algorithm would be a powerful tool for ERG signal analysis, but it requires large databases for verification and validation. A strong motivation for understanding ERG signals in more detail therefore starts with building a signal database that includes both adults and children; in this way, waveforms can be analyzed in terms of the underlying physiological processes and compared according to waveform morphology.
The novelty of the current study relates to the processing of pediatric signals, which, to the best of our knowledge, has only a limited description in the scientific literature. Additionally, we propose an approach to wavelet feature extraction that improves the accuracy of diagnosis classification for both adults and children. The limitation of the work is that it provides initial study results on a rather limited database; in the next step, the database is expected to be extended. The paper compares the classical existing approach, which uses four features, with the proposed one, which extends the feature set and leads to an improvement in classification accuracy. To extract the parameters, the cwt function of the PyWavelets (PyWT) library is used, with the eighth-order Gaussian wavelet chosen as the basis function.
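A minimal sketch of this scalogram computation is given below. The pywt.cwt call and the 'gaus8' wavelet follow the text; the sampling rate is taken from Section 2, while the file name and the scale range are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
import pywt

fs = 2000.0                           # sampling frequency in Hz (see Section 2)
erg = np.loadtxt("erg_signal.txt")    # hypothetical file holding one ERG record

scales = np.arange(1, 128)            # assumed scale range for the scalogram
coeffs, freqs = pywt.cwt(erg, scales, "gaus8", sampling_period=1.0 / fs)

# |CWT| matrix: rows correspond to scales/frequencies, columns to time samples.
scalogram = np.abs(coeffs)
```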
The rest of the paper is organized as follows: Section 2 discusses the advantages and disadvantages of a variety of signal processing methods. Section 3 presents our ERG signal database, as well as a pipeline for signal processing and feature extraction using the classical approach and the wavelet transform. In Section 4, we present the statistical summary of the classical ERG features in accordance with the ISCEV standard [14]. Section 5 presents the results of feature processing using the decision tree machine learning method. Section 6 discusses the obtained results. Section 7 draws the conclusions and outlines future work.

2. Related Work

The ERG signal is a complex signal that reflects the electrical activity of retinal cells after stimulation. It should be noted that the ERG is a short signal containing many spectral components: it is sampled at a high rate (2 kHz) and is short (200 ms); therefore, the signal spectrum lies in the range of 0 to 1 kHz, the spectral sampling is 7.5 Hz, and the useful information is distributed non-linearly and may be concentrated in a narrower range. Currently, the ERG is evaluated using the amplitude and latency of the known waves in this signal.
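As a brief check on the stated spectral range (a minimal derivation, assuming the 2 kHz figure refers to the sampling frequency), the upper limit of the spectrum is the Nyquist frequency:

```latex
f_{\mathrm{Nyquist}} = \frac{f_s}{2} = \frac{2\,\text{kHz}}{2} = 1\,\text{kHz}
```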
The authors of article [15] described all the possible methods for investigating ERG signals that are found in the scientific literature, including a comparative analysis of the methods, their advantages, and disadvantages. In this paper, we focus on the time–frequency-specific methods for ERG data processing.
The current scientific literature presents studies of ERG analysis in the frequency domain, mainly based on the fast Fourier transform (FFT), power spectral density (PSD), and linear prediction (LP) [16]. Scientific articles show a variety of research methods in the field of diagnosing diseases of the retina, but the results are difficult to generalize due to differences in ERG protocols. The methods presented in those articles demonstrate the accuracy of analysis in the time domain. Some studies focus on the frequency-domain analysis of the oscillatory potentials, since this signal has a smaller amplitude than the other ERG components.
Obviously, the disadvantage of the FFT method is the lack of temporal localization, which means that the power spectral density cannot indicate when specific frequency components occur in the signal. To solve this problem, the short-time Fourier transform (STFT) was proposed to analyze small sections of the signal using a window [17]. Thus, the STFT served as the basis for time–frequency analysis.
It is known from previous studies that a smaller window size improves the temporal resolution but reduces the number of resolvable discrete frequency components, so a single fixed window always trades time resolution against frequency resolution. Wavelet analysis (WA), whose effective window size varies with scale, therefore provides the full analysis potential for discriminating the ERG components.
The use of WA demonstrates some advantages over other frequency-domain methods because its window size varies, making it more appropriate for analyzing sudden and short-term signal changes. Most articles about ERG analysis in the time–frequency domain describe the use of the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) [18]. Both methods have their specific advantages and disadvantages. The CWT is highly redundant, which is beneficial from one point of view and a drawback from another: no information is lost, but more computation is required, resulting in slower processing compared to the DWT [19]. With the DWT, in contrast, it is possible to lose useful information if the correct level of decomposition is not chosen. When applying WA, two important factors should therefore be considered, namely the wavelet type and the decomposition. A careful evaluation of the feature extraction method and the reference database is required in order to extract the necessary information from ERG signals using the above-mentioned methods.
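As an aside to the decomposition-level issue noted above, the following minimal PyWavelets sketch (illustrative only, not part of the proposed method; the 'db4' wavelet and the signal file are assumptions) shows how the maximum meaningful DWT level can be determined before decomposing a signal:

```python
import numpy as np
import pywt

erg = np.loadtxt("erg_signal.txt")     # hypothetical ERG record
wavelet = pywt.Wavelet("db4")          # assumed wavelet family for the example

# Highest decomposition level that still makes sense for this signal length
# and filter length; going deeper risks the information loss mentioned above.
max_level = pywt.dwt_max_level(len(erg), wavelet.dec_len)

# Multi-level DWT: coeffs = [cA_n, cD_n, cD_{n-1}, ..., cD_1]
coeffs = pywt.wavedec(erg, wavelet, level=max_level)
```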

3. Materials and Methods

3.1. Database

The research used a database of electroretinogram signals, which includes five protocols of adult and pediatric electroretinogram signals [20,21]: Scotopic 2.0 ERG Response (53 pediatric signals and 23 adult signals), Maximum 2.0 ERG Response (80 pediatric signals and 42 adult signals), Photopic 2.0 ERG Response (74 pediatric signals and 32 adult signals), Photopic 2.0 ERG Flicker Response (63 pediatric signals and 38 adult signals), and Scotopic 2.0 ERG Oscillatory Potentials (20 signals). The electrophysiological studies were conducted at the IRTC Eye Microsurgery Ekaterinburg Center. The registration of the electroretinogram signals was performed with a computerized electrophysiological workstation EP-1000, manufactured by Tomey GmbH (Nuremberg, Germany). All signals were recorded at the same site using the same data acquisition protocols.

3.2. Features of Electroretinogram Signals

The classical analysis of the ERG signal is based on the assessment of the amplitudes a and b and the latencies l_a and l_b of the a- and b-waves (Figure 1a). In addition to the analysis of the classical parameters of the ERG signal, it is proposed to extract parameters from the wavelet scalogram (Figure 1b). To obtain the connected components of the ERG signal, the following processing of the wavelet scalogram is performed (the corresponding illustrations are shown in Figure 2):
  • Convert the wavelet scalogram values (Figure 2a) into an 8-bit encoding format (a value range from 0 to 255).
  • Binarize the image using the Otsu method (Figure 2b).
  • Erode the image with a 3 × 3 pixel kernel to remove the local artifacts associated with digital signal processing; erosion removes pixels at the boundaries of the segments (Figure 2c).
  • Determine the connected components of the wavelet scalogram using the connected components function from the OpenCV library (Figure 2d).
Mathematically, the connected components are represented by a set of labels called Markers. The Markers form an array of the same size as the original image and indicate the affiliation of each point of the wavelet scalogram with a particular segment. The wavelet scalogram was evaluated using the cwt (continuous wavelet transform) function of the PyWavelets library [22]. The image preprocessing, binarization, and separation into connected components were performed using the respective functions of the OpenCV library [23].
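A minimal sketch of the four processing steps above using OpenCV is shown next. The Otsu thresholding, the 3 × 3 erosion kernel, and the connected-component labelling follow the text; the normalization call used for the 8-bit conversion and the variable names are assumptions.

```python
import cv2
import numpy as np

# `scalogram` is the |CWT| matrix computed earlier (see the CWT sketch above).
img8 = cv2.normalize(scalogram, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Otsu thresholding yields the binarized scalogram (Figure 2b).
_, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3 x 3 erosion removes local artifacts at the segment boundaries (Figure 2c).
eroded = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=1)

# Connected-component labelling assigns a segment label (Marker) to each pixel (Figure 2d).
n_labels, markers = cv2.connectedComponents(eroded)
```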
Table 1 describes the parameters extracted from the wavelet scalogram of ERG, divided into segments from 1 to 6 (Figure 1b).
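The sketch below illustrates how the Table 1 parameters could be computed for one connected component, continuing from the previous sketch. The helper function, the handling of the 90% threshold, and the axis conventions (rows as frequencies, columns as time samples) are assumptions for illustration rather than the authors' code.

```python
import numpy as np

def segment_features(brightness, markers, label, freqs, times):
    """Table 1 parameters for the connected component with the given label.

    brightness: 2-D float array of scalogram brightness values,
    markers:    label image from cv2.connectedComponents,
    freqs/times: axis vectors (rows = frequencies, columns = time samples).
    """
    mask = markers == label
    values = brightness[mask]

    b_max = float(values.max())               # B_max
    a_median = float(np.median(values))       # A_median
    a_mean = float(values.mean())             # A_mean

    # Frequency and time of the segment maximum (f, t).
    masked = np.where(mask, brightness, -np.inf)
    f_idx, t_idx = np.unravel_index(np.argmax(masked), brightness.shape)

    # Frequency/time extremes of the whole segment (t1, t2, f1, f2).
    f_rows, t_cols = np.nonzero(mask)
    t1, t2 = times[t_cols.min()], times[t_cols.max()]
    f1, f2 = freqs[f_rows].min(), freqs[f_rows].max()

    # Extremes of the sub-region exceeding 90% of the segment maximum
    # (t1_90, t2_90, f1_90, f2_90).
    f_rows90, t_cols90 = np.nonzero(mask & (brightness >= 0.9 * b_max))
    t1_90, t2_90 = times[t_cols90.min()], times[t_cols90.max()]
    f1_90, f2_90 = freqs[f_rows90].min(), freqs[f_rows90].max()

    return {"B_max": b_max, "A_median": a_median, "A_mean": a_mean,
            "f": freqs[f_idx], "t": times[t_idx],
            "t1": t1, "t2": t2, "f1": f1, "f2": f2,
            "t1_90": t1_90, "t2_90": t2_90, "f1_90": f1_90, "f2_90": f2_90}
```

Calling such a function for every non-background label would yield the per-segment feature vectors described in Table 1.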

3.3. Machine Learning Pipeline

The generalized data processing pipeline is presented in Figure 4. A machine learning approach was used to predict the diagnosis of the subjects (healthy person or person with pathology) using the features of the biomedical signals. It was relevant for us to identify the usefulness of the new wavelet scalogram features; for this reason, the following feature groups were considered separately:
  • Classical features (amplitudes a and b, and latencies l_a and l_b of the a- and b-waves) (CF);
  • Wavelet analysis features for segments 1 and 2 (WA1–2);
  • Wavelet analysis features for segments 1, 2, 3, and 4 (WA1–4);
  • Combined set of the classical features and the wavelet analysis features for segments 1–4 (CF+WA1–4).
Decision trees (DTs) were selected as the machine learning method for the classification task of predicting the target values (diagnosis) from the different feature sets. The main reason for choosing DTs is their ability to select the most significant features for the task. In this paper, the decision trees were implemented, trained, and validated using the respective functions of the scikit-learn library [24].
As DTs tend to overfit the data easily, it is important to carefully tune the hyperparameters. In this research, the tuning of the following hyperparameters was considered: max_depth (in the range 1, 2, …, 14, 15) and min_samples_split (in the range 3, 6, …, 12, 15).
For each of the four feature sets, the optimal hyperparameters were found with a grid search (GridSearch) using 5-fold stratified cross-validation (StratifiedKFold). After the optimal hyperparameters were found, training and evaluation of the model were conducted as follows. First, we fit the DT on the whole data set. Then, we selected significant features using the feature_importances attribute; only features with non-zero importance values were retained for the subsequent analysis. Here, feature_importances is based on the reduction in Gini impurity. Afterwards, we performed cross-validation of the DT model using only the important features. The cross-validation used the StratifiedKFold implementation to ensure that each fold preserves the percentage of samples of each class, with the number of splits set to 5. The standard classification metrics were used: Accuracy, Precision, Recall, and F1-score. To obtain the final evaluation of the DT model, we averaged the metrics over the five validation folds; the standard deviation of the metrics over the folds was also considered. It is worth pointing out that high computational efficiency was not a requirement of the analyzed task; the accuracy of the final classification was the aim of our research.
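A sketch of this model-selection and evaluation procedure with scikit-learn is given below. The hyperparameter grids, the stratified 5-fold scheme, and the importance-based feature selection follow the text; the scoring criterion used inside the grid search, the random seeds, and the function layout are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

def evaluate_feature_set(X, y):
    """X: NumPy array (n_samples, n_features), y: binary diagnosis labels."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    # Tune max_depth (1..15) and min_samples_split (3, 6, ..., 15) by grid search.
    grid = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid={"max_depth": list(range(1, 16)),
                    "min_samples_split": list(range(3, 16, 3))},
        cv=skf,
        scoring="f1",  # assumed scoring criterion for the search
    )
    grid.fit(X, y)

    # Fit on the whole data set and keep only features with non-zero
    # Gini-based importance (see the data-leakage remark in Section 5).
    best = DecisionTreeClassifier(random_state=0, **grid.best_params_).fit(X, y)
    X_sel = X[:, best.feature_importances_ > 0]

    # Cross-validate the tree on the selected features; report mean and STD.
    scores = cross_validate(
        DecisionTreeClassifier(random_state=0, **grid.best_params_),
        X_sel, y, cv=skf,
        scoring=["accuracy", "precision", "recall", "f1"],
    )
    return {name: (np.mean(vals), np.std(vals))
            for name, vals in scores.items() if name.startswith("test_")}
```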

4. Statistical Analysis of the Electroretinogram Signal Database

Prior to applying the machine learning methods, the data were analyzed. To ensure consistent results, it was verified that each wavelet scalogram had all four correctly identified segments. If some of the segments were missing, the signal was not considered in the final analysis; such cases were mostly related to subjects with pathology. However, a more thorough analysis of whether all of the segments should be present on the wavelet scalogram as a feature for express diagnosis is required, and this analysis is out of the scope of this paper.

4.1. Pediatric Group

There were 65 signals used in the analysis of the pediatric group; 26 were diagnosed as healthy and 39 as subjects with pathology. Table 2 presents the obtained amplitude and latency values for the a- and b-waves. In accordance with the ISCEV Standard, the results table contains the median values and standard deviation (STD), as well as the 5th and 95th quantiles (Q5% and Q95%). The data are aggregated by diagnosis. The boxplots of the amplitude and latency values for the a- and b-waves are plotted in Figure 5a.
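For illustration, this kind of summary could be reproduced with pandas as follows (a sketch only; the file and column names are assumptions, not the authors' data layout):

```python
import pandas as pd

# Hypothetical table of classical features with one row per signal.
df = pd.read_csv("erg_classical_features.csv")  # columns: diagnosis, a, b, l_a, l_b

def q05(s):
    return s.quantile(0.05)

def q95(s):
    return s.quantile(0.95)

# Median, STD, and the 5th/95th quantiles per diagnosis, as in Tables 2 and 3.
summary = (df.groupby("diagnosis")[["a", "b", "l_a", "l_b"]]
             .agg(["median", "std", q05, q95]))
print(summary.round(2))
```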
From the data in Table 2 and Figure 5a, the following can be concluded: the amplitude values seem to be the better indicators for distinguishing between the two diagnostic groups. The subjects with pathology tended to have lower amplitude values. However, it is worth mentioning that there are “overlaps” that make a reliable diagnosis based on a single feature improbable.

4.2. Adults Group

For the adult group, 38 signals were used in the analysis; 11 were diagnosed as healthy and 27 as subjects with pathology. Table 3 presents the values of the amplitudes and latencies for the a- and b-waves. In accordance with the ISCEV Standard, we present the median values, standard deviation (STD), and the 5th and 95th quantiles (Q5% and Q95%). The data are aggregated by diagnosis. Figure 5b shows the boxplots of the amplitudes and latencies for the a- and b-waves.
From the data presented in Table 2 and Table 3 for the pediatric and adult groups, respectively, and in Figure 5b, the following can be concluded: only the b-wave amplitude seems to work as an indicator for distinguishing between the two diagnostic groups. The subjects with pathology tend to have lower amplitude values. In addition, one can note the latency of the a-wave, as the subjects with pathology tend to have higher values. As above, it is worth mentioning that there are “overlaps” that make a reliable diagnosis based on a single feature improbable.

5. Results

Table 4 presents the values of classification metrics for the pediatric group.
It is worth mentioning that the above-described approach, fitting the DT on the whole data set to select significant features and performing a cross-validation check afterwards, might introduce data leakage. However, with the limited amount of data, it was not feasible to evaluate the feature importance on a subset of the training data only.

5.1. Pediatric Group

For the CF features set, the following feature was considered as being important: b.
For the WA1–2 features set, the following four features were considered as being important (the segment number of the wavelet scalogram is given in parentheses): B_max (segment 1), A_median (segment 1), t_1 (segment 2), f_1^90 (segment 1).
For the WA1–4 features set, the following eight features were considered as being important: A_median (segment 1), A_mean (segment 1), A_median (segment 3), A_mean (segment 3), f_1 (segment 3), t (segment 4), t_2 (segment 4), t_2^90 (segment 4).
For the CF+WA1–4 features set, the following four features were considered as being important: l_a, A_median (segment 3), A_mean (segment 3), t (segment 4).
Judging from the data in Table 4, it can be concluded that the proposed new WA features seem to provide more robust and consistent models compared to the Classical features used.
Analyzing the lists of important features, it is possible to conclude: the WA features of segments 1 and 2 seem to have a higher classification “power” than the WA features of segments 3 and 4. At the same time, the Classical Features seem to better complement the WA features of segments 3 and 4.
Table 5 shows the values of the classification metrics for the adults group.

5.2. Adults Group

For the CF features set, the following three features were considered as being important: a, b, and l_b.
For the WA1–2 features set, the following two features were considered as being important: t_1^90 (segment 1) and t (segment 2).
For the WA1–4 features set, the following two features were considered as being important: t_1^90 (segment 1) and t_2^90 (segment 2).
For the CF+WA1–4 features set, the following five features were considered as being important: l_b, t (segment 1), t_1^90 (segment 1), t (segment 2), A_median (segment 2).
Judging from the data in Table 5, it can be concluded that the proposed new WA features seem to provide more robust and consistent models compared to the Classical features used. Analyzing the lists of important features, it can be concluded that the WA features of segments 1 and 2 seem to have the highest classification “power”. Interestingly, the combined feature set includes no WA features of segments 3 and 4.

6. Discussion

The proposed method is based on the use of the continuous wavelet transform. In comparison with the fast Fourier transform, linear prediction, the windowed Fourier transform, and other common methods, the continuous wavelet transform provides a variable window size, high resolution at both low and high frequencies, and the ability to obtain detailed information during rapid frequency changes; it is also suitable for extracting features from non-stationary signals.
In order to determine the ERG parameters, Varadharajan [25] compared the results of the time domain analysis and the continuous wavelet transform. It was confirmed that the continuous wavelet transform could distinguish between normal and abnormal ERGs, particularly in the early diagnosis of glaucoma, and that it was more accurate than the time domain analysis methods.
A research group led by Penkala [26] focused on extracting information about the time–frequency characteristics of the ERG a- and b-waves using wavelet analysis. According to their results, low-frequency components dominated (both in a-waves and b-waves), and their temporal distribution was affected by the brightness of the light pulses.
Barraco [27] used wavelet analysis to extract the characteristics of the ERG a-wave. The work focused on diagnosing two pathologies: achromatopsia and congenital stationary night blindness. The results showed that the number of dominant frequencies and the times of their occurrence in both of the studied diseases could reflect the state of the retinal photoreceptors. A comparison of individual pathological cases with a healthy control group revealed that, in both diseases, these components were shifted toward the lower frequencies. Other studies [28,29,30,31,32] also point toward the use of wavelet analysis. It should be noted that wavelet analysis is applied directly in the above studies; in other words, no parameters are extracted from the wavelet scalograms.

7. Conclusions

The results of the study show that the proposed algorithm improves the classification accuracy of adult electroretinogram signals by 19% and of pediatric signals by 20% compared to the classical algorithm. A promising use of ERG lies in differential diagnostics, and it may also be applied in preclinical toxicology and experimental modeling. The difficulty of developing methods for electrophysiological signal analysis in ophthalmology is associated with the complex morphological structure of the electrophysiological signal components, which arises from the generation of retinal cell electrical responses to light stimuli. A classical algorithm might not achieve high efficiency metrics, since in outpatient practice electroretinography is often used in conjunction with other diagnostic methods, which together allow an assessment of the retina’s functional status. The next stage of the current work will involve the use of an extended database, as well as an investigation of additional modern methods of feature extraction and classification. For instance, we are going to test such time-series classification techniques as shapelets, elastic distance measures, or dictionary-based methods.

Author Contributions

Conceptualization, A.Z. and D.Z.; methodology, A.Z., A.D. and D.Z.; software, A.Z. and A.D.; validation, V.B., A.Z. and D.Z.; formal analysis, M.R.; investigation, A.D.; writing—original draft preparation, A.Z.; writing—review and editing, D.Z. and M.R.; visualization, A.Z. and A.D.; supervision, D.Z.; project administration, A.Z.; funding acquisition, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

The research funding from the Ministry of Science and Higher Education of the Russian Federation (Ural Federal University Program of Development within the Priority—2030 Program) is gratefully acknowledged.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Zhdanov, A.E.; Dolganov, A.Y.; Borisov, V.I.; Lucian, E.; Bao, X.; Kazaijkin, V.N.; Ponomarev, V.O.; Lizunov, A.V.; Ivliev, S.A. OculusGraphy: Pediatric and Adults Electroretinograms Database, 2020. https://0-doi-org.brum.beds.ac.uk/10.21227/y0fh-5v04 (accessed on 29 November 2022).

Acknowledgments

Aleksei Zhdanov executed the design, the definition of intellectual content, data analysis, and manuscript preparation and editing within the Bi-nationally Supervised Doctoral Degrees/Cotutelle DAAD Research Grant.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Das, H.; Naik, B.; Behera, H.; Jaiswal, S.; Mahato, P.; Rout, M. Biomedical data analysis using neuro-fuzzy model with post-feature reduction. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 2540–2550.
  2. Henstridge, C.M.; Hyman, B.T.; Spires-Jones, T.L. Beyond the neuron–cellular interactions early in Alzheimer disease pathogenesis. Nat. Rev. Neurosci. 2019, 20, 94–108.
  3. Jamison, J.; Bush, R.; Lei, B.; Sieving, P. Characterization of the rod photoresponse isolated from the dark-adapted primate ERG. Vis. Neurosci. 2001, 18, 445–455.
  4. Adrian, E. The electric response of the human eye. J. Physiol. 1945, 104, 84.
  5. Granit, R. Two types of retinae and their electrical responses to intermittent stimuli in light and dark adaptation. J. Physiol. 1935, 85, 421.
  6. Creel, D.J. The Electroretinogram and Electro-Oculogram: Clinical Applications. Webvision: The Organization of the Retina and Visual System. 2015. Available online: https://webvision.med.utah.edu/book/electrophysiology/the-electroretinogram-clinical-applications/ (accessed on 29 November 2022).
  7. Hamilton, R.; Graham, K. Effect of shorter dark adaptation on ISCEV standard DA 0.01 and DA 3 skin ERGs in healthy adults. Doc. Ophthalmol. 2016, 133, 11–19.
  8. Tang, J.; Hui, F.; Coote, M.; Crowston, J.G.; Hadoux, X. Baseline detrending for the photopic negative response. Transl. Vis. Sci. Technol. 2018, 7, 9.
  9. Bach, M.; Meroni, C.; Heinrich, S.P. ERG shrinks by 10% when reducing dark adaptation time to 10 min, but only for weak flashes. Doc. Ophthalmol. 2020, 141, 57–64.
  10. McCulloch, D.L.; Marmor, M.F.; Brigell, M.G.; Hamilton, R.; Holder, G.E.; Tzekov, R.; Bach, M. ISCEV Standard for full-field clinical electroretinography (2015 update). Doc. Ophthalmol. 2015, 130, 1–12.
  11. Lyons, J.S.; Severns, M.L. Using multifocal ERG ring ratios to detect and follow Plaquenil retinal toxicity: A review. Doc. Ophthalmol. 2009, 118, 29–36.
  12. Robson, A.G.; Webster, A.R.; Michaelides, M.; Downes, S.M.; Cowing, J.A.; Hunt, D.M.; Moore, A.T.; Holder, G.E. “Cone dystrophy with supernormal rod electroretinogram”: A comprehensive genotype/phenotype study including fundus autofluorescence and extensive electrophysiology. Retina 2010, 30, 51–62.
  13. Johnson, M.A.; Jeffrey, B.G.; Messias, A.; Robson, A.G. ISCEV extended protocol for the stimulus–response series for the dark-adapted full-field ERG b-wave. Doc. Ophthalmol. 2019, 138, 217–227.
  14. Robson, A.G.; Frishman, L.J.; Grigg, J.; Hamilton, R.; Jeffrey, B.G.; Kondo, M.; Li, S.; McCulloch, D.L. ISCEV Standard for full-field clinical electroretinography (2022 update). Doc. Ophthalmol. 2022, 144, 165–177.
  15. Behbahani, S.; Ahmadieh, H.; Rajan, S. Feature Extraction Methods for Electroretinogram Signal Analysis: A Review. IEEE Access 2021, 9, 116879–116897.
  16. Moskowitz, A.; Hansen, R.M.; Fulton, A.B. ERG oscillatory potentials in infants. Doc. Ophthalmol. 2005, 110, 265–270.
  17. Li, X.X.; Yuan, N. Measurement of the oscillatory potentials of the electroretinogram in the domains of frequency and time. Doc. Ophthalmol. 1990, 76, 65–71.
  18. Wan, W.; Chen, Z.; Lei, B. Increase in electroretinogram rod-driven peak frequency of oscillatory potentials and dark-adapted responses in a cohort of myopia patients. Doc. Ophthalmol. 2020, 140, 189–199.
  19. Nair, S.S.; Joseph, K.P. Wavelet based electroretinographic signal analysis for diagnosis. Biomed. Signal Process. Control 2014, 9, 37–44.
  20. Zhdanov, A.E.; Borisov, V.I.; Dolganov, A.Y.; Lucian, E.; Bao, X.; Kazaijkin, V.N. OculusGraphy: Norms for electroretinogram signals. In Proceedings of the 2021 IEEE 22nd International Conference of Young Professionals in Electron Devices and Materials (EDM), Souzga, Russia, 30 June–4 July 2021; pp. 399–402.
  21. Zhdanov, A.E.; Dolganov, A.Y.; Borisov, V.I.; Lucian, E.; Bao, X.; Kazaijkin, V.N.; Ponomarev, V.O.; Lizunov, A.V.; Ivliev, S.A. OculusGraphy: Pediatric and Adults Electroretinograms Database. IEEE Dataport 2020.
  22. Lee, G.; Gommers, R.; Waselewski, F.; Wohlfahrt, K.; O’Leary, A. PyWavelets: A Python package for wavelet analysis. J. Open Source Softw. 2019, 4, 1237.
  23. Itseez. Open Source Computer Vision Library. 2015. Available online: https://github.com/itseez/opencv (accessed on 29 November 2022).
  24. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  25. Varadharajan, S.; Fitzgerald, K.; Lakshminarayanan, V. A novel method for separating the components of the clinical electroretinogram. J. Mod. Opt. 2007, 54, 1263–1280.
  26. Penkala, K.; Jaskuła, M.; Lubiński, W. Improvement of the PERG parameters measurement accuracy in the continuous wavelet transform coefficients domain. Ann. Acad. Medicae Stetin. 2007, 53, 58–60.
  27. Barraco, R.; Adorno, D.P.; Brai, M. Wavelet analysis of human photoreceptoral response. In Proceedings of the 2010 3rd International Symposium on Applied Sciences in Biomedical and Communication Technologies (ISABEL 2010), Roma, Italy, 7–10 November 2010; pp. 1–4.
  28. Jiménez, J.M.; Velasco, R.B.; Vázquez, L.B.; Ascariz, J.R.; De la Villa Polo, P. Multifocal electroretinography, glaucoma diagnosis by means of the wavelet transform. In Proceedings of the 2008 Canadian Conference on Electrical and Computer Engineering, Niagara Falls, ON, Canada, 4–7 May 2008; pp. 867–870.
  29. Miguel-Jiménez, J.; Boquete, L.; Ortega, S.; Rodriguez-Ascariz, J.; Blanco, R. Glaucoma detection by wavelet-based analysis of the global flash multifocal electroretinogram. Med. Eng. Phys. 2010, 32, 617–622.
  30. Miguel-Jiménez, J.M.; Ortega, S.; Boquete, L.; Rodríguez-Ascariz, J.M.; Blanco, R. Multifocal ERG wavelet packet decomposition applied to glaucoma diagnosis. Biomed. Eng. Online 2011, 10, 37.
  31. Barraco, R.; Adorno, D.P.; Brai, M. An approach based on wavelet analysis for feature extraction in the a-wave of the electroretinogram. Comput. Methods Programs Biomed. 2011, 104, 316–324.
  32. Barraco, R.; Persano Adorno, D.; Brai, M. ERG signal analysis using wavelet transform. Theory Biosci. 2011, 130, 155–163.
Figure 1. Amplitude–time (a) and frequency–time (b) representations of the electroretinogram signal.
Figure 2. Illustration of the steps for wavelet scalogram connected components’ determination: (a) Wavelet scalogram; (b) Binarized scalogram; (c) Filtered binarized scalogram; (d) Components extraction.
Figure 3. Parameters extracted from the wavelet scalogram of the electroretinogram signal: (a) maximum brightness of the wavelet scalogram segment; (b) frequency and time of the maximum region of the wavelet scalogram segment; (c) median and mean brightness values of the wavelet scalogram segment; (d) frequency and time extremes of the wavelet scalogram segment.
Figure 4. Illustration of the generalized pipeline of the data processing.
Figure 5. Boxplots of amplitudes and latencies for a- and b-waves: (a) pediatric group; (b) adult group.
Table 1. Parameters of the electroretinogram signal.

Designation | Name | Description
B_max | maximum brightness of the wavelet scalogram segment | estimation of the segment amplitude in the selected frequency and time domains over the entire area (Figure 3a)
f, t | frequency and time of the maximum region of the wavelet scalogram segment | estimation of the frequency–time coordinates of the maximum area of the analyzed segment, associated with the prevalence of the contribution of specific cells or cellular structures (Figure 3b)
A_median | median brightness value of the wavelet scalogram segment | evaluation of the brightness distribution over the entire area of the analyzed segment (Figure 3c)
A_mean | mean brightness value of the wavelet scalogram segment | assessment of segment displacement and uniformity of the brightness distribution over the entire area of the analyzed segment (Figure 3c)
t_1, t_2, f_1, f_2 | frequency and time extremes of the wavelet scalogram segment | evaluation of the spatial location of the analyzed segment on the wavelet scalogram (Figure 3d)
t_1^90, t_2^90, f_1^90, f_2^90 | frequency and time extremes of the wavelet scalogram 90% segment | evaluation of the spatial location of the part of the analyzed segment where the amplitude is higher than 90% of the maximum value
Table 2. Values of amplitudes and latencies for the a- and b-waves for the pediatric group.

Wave | Diagnosis | Median ± STD | Q5% | Q95%
a | healthy | 44.42 ± 16.06 | 17.10 | 60.51
a | pathology | 36.47 ± 17.06 | 6.46 | 59.83
b | healthy | 70.37 ± 13.97 | 47.81 | 89.03
b | pathology | 60.13 ± 27.91 | 18.94 | 109.57
l_a | healthy | 21.00 ± 7.71 | 9.38 | 32.38
l_a | pathology | 18.50 ± 8.32 | 8.35 | 33.00
l_b | healthy | 47.50 ± 9.02 | 26.38 | 59.25
l_b | pathology | 47.00 ± 7.01 | 35.80 | 59.00
Table 3. Values of amplitudes and latencies for the a- and b-waves for the adult group.

Wave | Diagnosis | Median ± STD | Q5% | Q95%
a | healthy | 34.51 ± 13.30 | 17.53 | 50.38
a | pathology | 32.99 ± 15.33 | 5.19 | 54.39
b | healthy | 68.73 ± 19.47 | 37.52 | 88.52
b | pathology | 48.44 ± 24.22 | 17.70 | 97.95
l_a | healthy | 47.50 ± 9.02 | 26.38 | 59.25
l_a | pathology | 47.00 ± 7.01 | 35.80 | 59.00
l_b | healthy | 47.00 ± 9.16 | 39.25 | 65.25
l_b | pathology | 51.50 ± 7.20 | 39.15 | 62.20
Table 4. Comparison of the classical and wavelet feature sets for pediatric group classification (mean ± STD over the five validation folds).

Features Set | Accuracy | F1 | Precision | Recall
CF | 0.52 ± 0.06 | 0.52 ± 0.18 | 0.72 ± 0.19 | 0.52 ± 0.32
WA1–2 | 0.70 ± 0.18 | 0.74 ± 0.15 | 0.78 ± 0.17 | 0.76 ± 0.24
WA1–4 | 0.60 ± 0.06 | 0.64 ± 0.05 | 0.70 ± 0.08 | 0.61 ± 0.12
CF+WA1–4 | 0.58 ± 0.10 | 0.65 ± 0.07 | 0.66 ± 0.07 | 0.66 ± 0.10
Table 5. Comparison of the classical and wavelet feature sets for adults group classification (mean ± STD over the five validation folds).

Features Set | Accuracy | F1 | Precision | Recall
CF | 0.54 ± 0.24 | 0.62 ± 0.35 | 0.55 ± 0.31 | 0.73 ± 0.43
WA1–2 | 0.78 ± 0.23 | 0.84 ± 0.17 | 0.86 ± 0.19 | 0.84 ± 0.16
WA1–4 | 0.81 ± 0.20 | 0.86 ± 0.14 | 0.87 ± 0.17 | 0.88 ± 0.17
CF+WA1–4 | 0.83 ± 0.10 | 0.88 ± 0.07 | 0.89 ± 0.07 | 0.88 ± 0.10
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
