Article

Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques

1 Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2 Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
3 Department of Medicine, Division of Cardiology, Columbia University Medical Center, New York, NY 10032, USA
4 Department of Cardiology, National Heart Centre Singapore, Singapore 169609, Singapore
5 Duke-NUS Medical School, Singapore 169857, Singapore
6 Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Torino, Italy
7 School of Engineering, Ngee Ann Polytechnic, Clementi, Singapore 599489, Singapore
8 Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
9 International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 8608555, Japan
10 Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 599494, Singapore
* Author to whom correspondence should be addressed.
Submission received: 27 January 2022 / Revised: 22 March 2022 / Accepted: 28 March 2022 / Published: 6 April 2022

Abstract

Hypertrophic cardiomyopathy (HCM) is a genetic disorder that exhibits a wide spectrum of clinical presentations, including sudden death. Early diagnosis and intervention may avert the latter. Left ventricular hypertrophy on heart imaging is an important diagnostic criterion for HCM, and the most common imaging modality is heart ultrasound (US). US is operator-dependent, and its interpretation is subject to human error and variability. We propose an automated computer-aided diagnostic tool to discriminate HCM from healthy subjects on US images. We used the local directional pattern and the ResNet-50 pretrained network to classify heart US images acquired from 62 known HCM patients and 101 healthy subjects. Deep features were ranked using Student's t-test, and the most significant feature (SigFea) was identified. An integrated index, empirically derived from simulation, was defined for each subject as 100 · log10(SigFea/2), and a diagnostic threshold value was calculated as the mean of the minimum integrated index among HCM subjects and the maximum among healthy subjects. An integrated index above a threshold of 0.5 separated HCM from healthy subjects with 100% accuracy in our test dataset.

1. Introduction

Hypertrophic cardiomyopathy (HCM) is an autosomal dominant genetic disorder caused by mutation of one of several genes coding for various proteins of the cardiac sarcomere. Morphologically, HCM is characterized by a hypertrophied, nondilated left ventricle with or without right ventricular involvement in the absence of another cardiac or systemic disease [1,2]. The prevalence of HCM is approximately 1 in 500 persons in the general population [3]. Altered sarcomeric proteoforms have been identified in surgical samples of HCM patients [4], and the hypertrophied wall is composed of disarrayed myocardial fibers with interstitial fibrosis, which results in reduced ventricular compliance [5] (see Figure 1). Clinical presentations of HCM range widely, from asymptomatic to sudden cardiac death in young adults [6,7,8].
On cardiac ultrasound (US), the classical form of HCM is typified by asymmetrical septal hypertrophy, defined as an interventricular septal (IVS) thickness of at least 15 mm and a ratio of IVS to left ventricular posterior wall thickness > 1.3 in the absence of any valvular or systemic disease [9]. In other forms, hypertrophy may involve different myocardial segments; HCM can be classified into reverse curvature septum, sigmoid septum, neutral, apical HCM, and midventricular hypertrophy subtypes [10]. Pathophysiologically, this results in both systolic and diastolic left ventricular dysfunction [11]. This study aimed to develop an automated detection method to discriminate HCM from healthy controls on cardiac US.
Computer-aided diagnostic (CAD) tools are increasingly being used to reduce the time-cost of diagnosis. In [12], a CAD tool for diagnosing congestive heart failure (CHF) on electrocardiogram (ECG) signals was described. Other researchers used CAD tools to analyze US images. The use of CAD tools to categorize infarcted myocardium versus normal on heart US images [13] and artificial intelligence to analyze cardiovascular US images in general [14] have been reviewed. In [15], the authors extracted textural features and employed particle swarm optimization, attaining a maximum accuracy of 99.33% for diagnosing CHF. The same group used the double-density dual-tree discrete wavelet transform (DD-DTDWT) to identify coronary artery disease, achieving an accuracy of 96.05% [16]. They extended their work to screening pulmonary hypertension using entropy features and attained a classification accuracy of 92% [17]. Notably, the same group has developed a CAD tool to recognize the four-chamber heart US images of the fetuses of pregnant women with pregestational diabetes mellitus or gestational diabetes mellitus using the local preserving class separation technique [18].
Very few works have been published on the automated characterization of HCM or left ventricular hypertrophy using US images [19,20,21,22]. In [19], dilated cardiomyopathy and HCM were diagnosed from heart US parasternal short-axis views. The left ventricle was segmented using fuzzy c-means clustering, and features were extracted using principal component analysis and discrete cosine transform, which were then fed to various classifiers. An overall accuracy of 92.04% was achieved for classifying normal versus abnormal hearts using principal component analysis (PCA) features with the backpropagation neural network (BPNN) classifier. In [20], Darwinian particle swarm optimization (DPSO) and fuzzy c-means (FCM) clustering were used for segmenting the left ventricle on the parasternal short-axis view. For the extracted gray level co-occurrence matrix (GLCM) and discrete cosine transform (DCT) features, 90% accuracy was achieved using a support vector machine (SVM) classifier. In [21], a multilayer convolutional neural network (CNN) model was trained to detect HCM using the apical four-chamber view, which achieved high discriminant utility with a C statistic value of 0.93. In [22], texture-based analysis was utilized to characterize HCM by first-order statistics, and the GLCM, along with the features, were fed to an SVM classifier.
The current work developed CAD tools for assessing HCM using four-chamber heart US images. The contributions of the paper are as follows:
  • Established databanks of four-chamber US images of normal and HCM subjects;
  • Created deep features by combining local texture featured images with deep neural networks; and
  • Generated an integrated index to categorize normal versus HCM using a distinctive number.
The remainder of the paper is organized as follows: Section 2 describes the materials used and the US image acquisition. Our analysis methodology is outlined in Section 3. Experimental results and discussion of the results are presented in Section 4 and Section 5, respectively. Finally, the concluding remarks of the paper are given in Section 6.

2. Materials

A total of 62 patients (mean age 50.7 ± 14.3 years) diagnosed with HCM who visited the cardiology outpatient department at a single center were prospectively recruited, and 101 age-matched healthy individuals (mean age 52.4 ± 15.5 years) attending the same center for routine health checks were recruited as controls. The institutional ethics committee at Kasturba Hospital Manipal approved the study (IEC No.: 48/2020), and informed consent was obtained from all participants. HCM was diagnosed if the echocardiographic examination showed a nondilated, hypertrophic left ventricle (LV) without any known cause (e.g., long-term hypertension or another cardiac/systemic disease) and a ratio of IVS thickness to posterior wall (PW) thickness > 1.3, with or without left ventricular outflow tract obstruction (LVOTO), or if the patient was diagnosed with apical HCM. Among the HCM patients, 42 (67.74%) presented with symptoms such as chest pain, dyspnea on exertion, and syncope, and 20 (32.26%) were diagnosed incidentally. Subjects with hypertension, renal failure requiring medical intervention, left ventricular ejection fraction < 55%, known ischemic heart disease, congenital heart disease, or valvular heart disease of more than mild severity were excluded from the study. All participants underwent heart US examination on a Vivid S60 system (GE Healthcare) with a 3Sc-RS phased-array transducer probe and a frequency range of 1.3 to 4.5 MHz. A standard parasternal short-axis view at the mid-left ventricular (papillary muscle) level and apical 2- and 4-chamber views were acquired and archived digitally. For each participant, one static image from the cine 4-chamber view, at the time frame corresponding to the R wave on the ECG, was selected for analysis using the CAD. In total, 62 HCM and 101 normal images were analyzed. Examples of typical images used are provided in Figure 2.

3. Methodology

Deep neural networks have achieved excellent performance for pattern recognition [23,24,25,26]. Such a model is typically trained on a large dataset in one domain, and the knowledge gained is then transferred to another domain comprising a smaller dataset [27]. In the current study, we exploited local descriptors such as the local directional pattern (LDP) [28] and a pretrained ResNet-50 (RNet50) [29] network to generate deep features. Figure 3 shows the various stages of the proposed system, which include feature generation, feature selection, and classification. A detailed description of each stage is provided in subsequent sections.

3.1. Preprocessing

Unwanted information such as labels, signals, etc., was first removed from the apical 4-chamber heart US image. A mask was then generated to extract the region of interest to increase system efficacy, and a median filter of size 5 × 5 was applied to it to reduce the noise level. The filtered image was then resized using the bicubic interpolation technique for further processing [30]. Figure 4 shows the created mask and preprocessed image.
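The preprocessing steps above can be sketched as follows. This is a minimal SciPy illustration, with the region-of-interest mask assumed to be given (mask generation itself is not shown, and the function name and array shapes are ours, not from the paper):

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def preprocess(us_image, roi_mask, out_size=(224, 224)):
    """Mask, denoise, and resize one heart US frame.

    `us_image` and `roi_mask` are 2-D arrays of equal shape; `roi_mask`
    is 1 inside the cardiac region and 0 over labels, ECG trace, etc.
    """
    roi = us_image * roi_mask                      # keep the region of interest only
    denoised = median_filter(roi, size=5)          # 5 x 5 median filter
    # Bicubic interpolation (cubic spline, order=3) to the target size
    factors = (out_size[0] / roi.shape[0], out_size[1] / roi.shape[1])
    return zoom(denoised.astype(float), factors, order=3)

frame = np.random.rand(300, 400)
mask = np.ones((300, 400))
out = preprocess(frame, mask)
```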

3.2. Feature Generation

This stage generated the features used to characterize the 4-chamber heart US image.

3.2.1. Local Directional Pattern (LDP)

The LDP is a descriptor that uses Kirsch compass kernels to extract the directional component [28], and is an improvement on the traditional local binary pattern. For a given central pixel i_c of an image, with 3 × 3 neighborhood pixels of intensity values i_n, n = 0, 1, ..., 7, Kirsch edge detectors of size 3 × 3 with eight possible orientations centered at (x_c, y_c) were applied to obtain the responses M_n, n = 0, 1, ..., 7, corresponding to the pixel values i_n. Based on the k-th highest Kirsch activation M_k, all neighboring pixels with higher Kirsch responses were set to 1, while the rest were set to a null value, as we were only interested in generating the LDP pattern with the most evident directions. The LDP value of (x_c, y_c) with the various directional responses was then given by:
$$\mathrm{LDP}=\sum_{n=0}^{7} b\left(M_{n}-M_{k}\right)\cdot 2^{n} \tag{1}$$
$$b(x)=\begin{cases}1, & x \geq 0\\ 0, & \text{otherwise}\end{cases} \tag{2}$$
The complete process to compute the LDP is illustrated in Figure 5.
The generated LDP patterns were more stable in the presence of noise and changes in image brightness. Figure 6 shows the preprocessed 4-chamber heart US images and the corresponding LDP images.
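A compact sketch of the LDP computation described above, using NumPy/SciPy. The kernel ordering and the choice of k = 3 prominent directions follow common LDP practice and are our assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

# Eight 3x3 Kirsch compass kernels, one per direction
KIRSCH = [np.array(m) for m in [
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # M0 east
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # M1 north-east
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # M2 north
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # M3 north-west
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # M4 west
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # M5 south-west
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # M6 south
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # M7 south-east
]]

def ldp(image, k=3):
    """8-bit LDP code per pixel: the k most prominent Kirsch responses set to 1."""
    img = image.astype(float)
    resp = np.stack([np.abs(convolve(img, kern)) for kern in KIRSCH])
    m_k = np.sort(resp, axis=0)[-k]            # k-th highest response M_k per pixel
    bits = (resp >= m_k).astype(np.uint8)      # b(M_n - M_k)
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)

codes = ldp(np.random.rand(64, 64))
```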

3.2.2. Deep-Learning Model

RNet50 is a deep-learning neural network for image classification that has been pre-trained using a subset of images from the ImageNet database [31]. The network is based on residual learning, and comprises 50 layers. Deep networks facilitate the extraction of significant features for efficient classification [32]. RNet50 is a deeper network than other CNN architectures, yet possesses fewer parameters [33]. The stacking of convolutional layers usually demonstrates good performance initially, but later declines due to gradient vanishing [33,34]. RNet50 circumvents the degradation issue by incorporating a deep residual learning framework with identity mapping [33,34,35,36]. The latter allows the CNN model to bypass the present weight layer if not required; therefore, inputs from the present convolutional layer can be copied to the next one without any alteration. The residual block is fundamental to the RNet framework. RNet50 contains 16 such residual blocks, on which it capitalizes to accelerate the network’s training, reduce training errors, and preserve the accuracy [37]. The residual block for the considered stack of three layers was defined as:
$$y=F\left(x,\left\{W_{i}\right\}\right)+x \tag{3}$$
where F is the residual function to be learned, x is the input vector, and y is the output vector, obtained by element-wise addition via the skip connection.
The bottleneck residual block for RNet50 consisted of three convolutional layers: the initial and final 1 × 1 layers reduced and restored the dimensions, respectively; while the middle 3 × 3 layer dealt with the reduced dimensions [29,38]. This bottleneck architecture greatly reduced the computational complexity and the number of parameters. The inputs to RNet50 were RGB images with dimensions of 224 × 224, and the output dimensions were reduced to 112 × 112, 56 × 56, 28 × 28, 14 × 14, 7 × 7, and 1 × 1 after passing in turn through the five categories of convolutional and average pool layers [29,39]. These RGB images were obtained by triplicating the gray level channel. In general, deep layers learn global features while shallow layers are utilized to capture features such as corners, edges, curves, etc. (i.e., local features). The knowledge of learned features from a general image dataset can be transferred to other domains, which helps save training effort and time [27,36,40,41]. The current study used the pretrained RNet50 with its weights unchanged to extract the local features from heart US images to train the final layers. Here, transfer learning was a nonlinear function that used the source task and domain knowledge to learn the target task in the target domain [23]. The complete structure of the feature extraction stage is illustrated in Figure 7. Note that the LDP (not the raw) image of the heart US image was input to the pretrained RNet50 to obtain a feature size of 2048. The RNet50 architecture had been pretrained in the ImageNet database [31] and modified to classify the images into two classes: normal and HCM. The weights were updated using the stochastic gradient descent with the momentum algorithm on a GTX 1070 GPU, with an initial learning rate of 0.0001 and a learn rate drop factor of 0.01. This was done for a mini-batch size of 32, and the sequence was shuffled every epoch and validated at every 10th epoch.
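The identity-mapping idea behind the residual block can be illustrated with a toy fully connected residual unit; this is a didactic sketch, not the actual RNet50 bottleneck block, whose convolutional weights are pretrained:

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = F(x, {W_i}) + x : a toy fully connected residual unit.

    F is two weight layers with a ReLU in between; the identity skip
    connection adds the unchanged input back to the learned residual.
    """
    h = np.maximum(w1 @ x, 0)        # first layer + ReLU
    f = w2 @ h                       # residual function F(x)
    return f + x                     # element-wise addition via the skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = np.zeros((8, 8))
# With zero weights in the last layer, F(x) = 0 and the block reduces to the
# identity mapping, i.e., the input is passed through without alteration.
y = residual_block(x, w1, w2)
```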

3.3. Feature Selection

A Student's t-test was used for feature selection. This test measures the statistical difference between two sets by computing the ratio of the difference between the class means to the variability of the two classes [42]. The null hypothesis is that the means of the two groups are equal, and the hypothesis can be rejected based on the computed t-value:
$$t=\frac{\mu_{x}-\mu_{y}}{\sqrt{\frac{\sigma_{x}^{2}}{S_{x}}+\frac{\sigma_{y}^{2}}{S_{y}}}} \tag{4}$$
where μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the two groups, respectively, and S_x and S_y denote the total numbers of samples in the two groups [43]. The t-values and the corresponding p-values were calculated for the features of both classes. The t-value estimated the difference between the two groups, and a significant p-value implied that the difference had not occurred by chance. Features were ranked by selecting higher t-values with lower p-values [44].
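The ranking procedure above can be sketched with SciPy; `ttest_ind` with `equal_var=False` matches the per-group variance terms in the t-statistic. The synthetic data and the 0.005 significance cutoff (taken from Section 4) are illustrative only:

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_features(X_hcm, X_normal, p_max=0.005):
    """Rank feature columns by |t| (descending), keeping only p < p_max."""
    t, p = ttest_ind(X_hcm, X_normal, axis=0, equal_var=False)
    order = np.argsort(-np.abs(t))
    return [(i, t[i], p[i]) for i in order if p[i] < p_max]

rng = np.random.default_rng(1)
X_normal = rng.standard_normal((101, 5))     # 101 normal subjects, 5 features
X_hcm = rng.standard_normal((62, 5))         # 62 HCM subjects
X_hcm[:, 2] += 3.0                           # make feature 2 strongly discriminative
ranked = rank_features(X_hcm, X_normal)
```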

3.4. Classification

In this stage, the classes were predicted using supervised and unsupervised methodologies. Most medical applications use supervised classification techniques, such as probabilistic neural networks [45,46], SVM [47,48], k-nearest neighbor [49], etc., to predict the class label of target data. However, the parameter settings required by some of these classifiers may result in overfitting. To overcome this, many researchers have employed indexing to reflect the inference of the generated features, whereby a distinct number is formulated to distinguish abnormal from normal images [50,51,52,53,54,55,56,57,58,59]. Accordingly, we formulated an integrated index for HCM (IIHCM) based on the significant features. The equations, empirically derived from simulation, are given as:
$$\mathrm{IIHCM}=\log_{10}\left(\mathrm{SigFea}/2\right)\times 100 \tag{5}$$
and
$$Th=\frac{\min\left(\mathrm{IIHCM}_{hcm}\right)+\max\left(\mathrm{IIHCM}_{normal}\right)}{2} \tag{6}$$
where SigFea is the most significant feature, selected based on its highest t-value.
Dividing the feature by 2 clusters the data of the same class together, while the log10 term separates the two classes to a great extent. The threshold value (Th) was calculated as the mean of the minimum integrated index among HCM subjects and the maximum among healthy subjects, giving a definite boundary between the HCM and normal heart images; i.e., attaining discrimination between the normal and HCM classes using a unique number.
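A minimal sketch of the index and threshold computation of Equations (5) and (6); the feature values below are illustrative only, not the study's data:

```python
import numpy as np

def ii_hcm(sig_fea):
    """Integrated index: IIHCM = log10(SigFea / 2) * 100."""
    return 100.0 * np.log10(np.asarray(sig_fea) / 2.0)

def threshold(idx_hcm, idx_normal):
    """Th = (min over HCM indices + max over normal indices) / 2."""
    return (idx_hcm.min() + idx_normal.max()) / 2.0

# Hypothetical SigFea values: HCM features above 2 (positive index),
# normal features below 2 (negative index).
hcm_fea = np.array([2.4, 2.8, 3.1])
normal_fea = np.array([1.6, 1.8, 1.9])
idx_h, idx_n = ii_hcm(hcm_fea), ii_hcm(normal_fea)
th = threshold(idx_h, idx_n)     # a single number separating the two classes
```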

4. Experimental Results

All images were preprocessed to obtain only the region of interest containing the heart and resized to 224 × 224 for input to RNet50. For each image, a feature vector of size 2048 was generated (i.e., LDPRes). The proposed algorithm was executed on a system with the following specifications: Intel i7-7700K quad-core processor @ 4.7 GHz, 8 GB 2400 MHz single-channel memory, and an NVIDIA GTX 1070 GPU with 8 GB VRAM, under the MATLAB platform. Features with p-values < 0.005 were selected and then ranked in descending order of t-values. The selected ranked features are shown in Table 1.
The highest-ranked feature, LDPRes870, was highly significant, and its distribution is shown in Figure 8a,b. Both the normal and HCM data were distributed toward the positive side. Hence, the highly significant feature LDPRes870 was used in Equation (5) to compute IIHCM. As seen in Figure 8c, the HCM and normal data were distributed toward the positive and negative sides, respectively. From Equation (6), the threshold value was calculated as Th ≈ 0.5, a single distinct number that separated the HCM and normal cases, as shown in the box plots of the indexed data in Figure 8d. An integrated index above the threshold of 0.5 separated HCM from healthy subjects with 100% accuracy in our test dataset.

Comparative Study

We compared the proposed method to four deep-learning techniques: ResNet-18 (RNet18) [29], AlexNet (ANet) [60], DarkNet (DNet) [61], and GoogLeNet (GNet) [62]. In the experiments, pretrained networks were used, as the small size of the dataset made it difficult to fine-tune the parameters. Feature extraction was performed by activating layers of the pretrained network as features. The details of each network with its generated feature size for LDP images are given in Table 2.
The LDP was applied to every preprocessed image to obtain the local texture features, which were then input to the aforementioned deep-learning architectures. Feature sizes of 2048, 512, 4096, 1000, and 1024 were obtained using RNet50, RNet18, ANet, DNet, and GNet, respectively. These features were further ranked using Student's t-test. The ranked features obtained from the various methods were classified using the SVM classifier with a polynomial kernel, and performance measures such as accuracy (Acc.), sensitivity (Sen.), specificity (Spe.), and positive predictive value (PPV) were computed [63]. LDP-RNet50 achieved remarkable performance using only three features. Table 3 shows the performance of the various methods using three features for a randomly partitioned training (70%) and test (30%) dataset. While LDP-RNet50 with SVM achieved 100% accuracy, the proposed IIHCM additionally categorized HCM versus normal using a single index value with appropriate positive and negative ranges.
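The evaluation protocol (70/30 split, polynomial-kernel SVM on the three top-ranked features, accuracy/sensitivity/specificity) can be sketched with scikit-learn; the Gaussian stand-in features below are purely illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Stand-in for the three top-ranked deep features of each subject
X = np.vstack([rng.normal(0.0, 1.0, (101, 3)),    # normal class
               rng.normal(3.0, 1.0, (62, 3))])    # HCM class, shifted for illustration
y = np.r_[np.zeros(101), np.ones(62)]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="poly", degree=3).fit(X_tr, y_tr)

preds = clf.predict(X_te)
acc = (preds == y_te).mean()            # accuracy
sen = preds[y_te == 1].mean()           # sensitivity (recall on HCM)
spe = 1.0 - preds[y_te == 0].mean()     # specificity
```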
We chose to train on LDP images derived from the US images, in preference to the raw US images, because we believed the LDP would be more stable in the presence of noise and changes in image brightness, which are common quality issues with US. As shown above, the LDP-RNet50 combination achieved the best performance in our experiments. To assess the contribution of the LDP, we conducted additional experiments using RNet50 alone, as well as two customized CNNs, CNN-1 and CNN-2, with 12 and 16 layers, respectively [18]. With each learning method, the US image dataset was randomly partitioned into 70% training and 30% testing data, and the system was tested 10 times. The average accuracy rates were 89.65%, 92.51%, and 93.26% for CNN-1, CNN-2, and RNet50, respectively, which were good, but still lower than the LDP-RNet50 approach.

5. Discussion

In this study, the categorization of normal versus HCM was conducted on 163 heart US images. US image quality is often degraded by noise, and interpretation may be affected by image brightness, which scan settings can arbitrarily alter. We used the LDP to obtain reproducible structural patterns: it encodes texture through different directional responses. In the presence of noise, the relative perspective of edges may change; in such situations, the LDP produces more stable patterns. The LDP-based images were input to the RNet50 network to obtain deep features, which yielded good separation results (see Figure 8). The use of a pretrained model allowed us to extract features from our dataset with a fixed mechanism using pretrained weights [40], which was faster than training the model from random weights [64]. The generated deep features were statistically significant, with p-values < 0.005. The highest-ranked features from all methods and their distributions are shown in Table 4 and Figure 9, respectively. In contrast to the LDP-RNet50 feature, which exhibited good separation, the features from all other deep-learning approaches demonstrated overlap between the HCM and normal groups (see Figure 9).
Using Equations (5) and (6), a single number was formulated for each feature that could distinguish HCM from normal subjects. IIHCM was also applied to other deep features (Table 4), but the results from RNet18, ANet, and GNet did not surpass those derived from the RNet50 network (Figure 10), and could not discriminate as well between healthy versus HCM subjects. The box plot for DNet is not shown due to its negative values.
Our best results and integrated index were obtained by training the entire LDP dataset using the pretrained RNet50 model without fine-tuning. Then, as a sensitivity analysis, we divided the LDP dataset into 70% training and 30% testing sets (which would have been a more conventional approach had the learning method not been pretrained), and repeated the experiment. The integrated index thus derived using identical methodology demonstrated excellent separation between normal and HCM (Figure 11), which indicated that our findings were robust.
We synthetically generated minority class samples using an adaptive synthetic (ADASYN) sampling approach [65]. ADASYN uses the weighted distribution of minority data and reduces the bias that is introduced due to an imbalanced dataset. As a result, we obtained 198 samples (i.e., normal = 101 samples and HCM = 97 samples) after ADASYN. Further, the obtained LDP-RNet50 features of all the samples were analyzed using the t-distributed stochastic neighbor embedding (t-SNE) technique, which helped to visualize data by reducing its dimensions [66]. Figure 12 shows the data visualization of ranked LDP-RNet50 features using t-SNE.
Further, we performed classification using the k-fold cross-validation technique. The SVM classifier obtained an accuracy of 100%, a sensitivity of 100%, and a specificity of 100% with 10-fold cross-validation (the obtained results were identical when k = 5 and 7). In addition, it was noted that the proposed system achieved an area under the curve (AUC) of 1.00 (Figure 13).
In addition, we used various criteria, such as entropy, Bhattacharyya distance, ROC, and the Wilcoxon test, to assess the significance of the generated features [67,68]. Only Student's t-test and the Wilcoxon signed-rank test determined feature LDPRes870 to be the most significant feature, with p < 0.005 (refer to Table 4 and Table 5).
Using this feature, the proposed IIHCM achieved an accuracy of 100%. Table 6 summarizes the state-of-the-art techniques proposed to detect HCM using four-chamber heart US images. To the best of our knowledge, this is the first work to propose an index for HCM classification.
This work successfully categorized normal versus HCM heart US images. The advantages of the proposed system are:
  • An integrated index based on heart US image features was developed that could effectively discriminate for HCM subjects.
  • The use of a single distinct value simplified the classification and could facilitate early clinical adoption, especially in rural and semiurban areas where access to experienced US operators may be limited.
  • The proposed framework can be generalized to image analysis of other imaging modalities and/or other anatomical regions; e.g., fundus images, brain magnetic resonance imaging, etc.
We have shown that the novel combination of LDP and RNet50 helped to extract the discriminative features automatically without the need for manual input; e.g., measurement of dimensions or quantitation of degree of curvature. The excellent performance appeared to be unique to the combination. It was observed that LDP combined with other learning methods such as ResNet-18, AlexNet, DarkNet, and GoogLeNet did not yield a good separation of the features. Learning of original US images with RNet50 and without LDP processing yielded inferior performance. Our novel LDP-RNet50-based method represents an original contribution toward the automatic classification of HCM versus normal from images, instead of traditional methods requiring at least some expert knowledge and input.
In addition, we have developed from the LDP-RNet50 an integrated index that can distinguish HCM from normal based on a diagnostic threshold value that we derived from our dataset. Unlike simple binary classification, an index is a relatable parameter that can inform the doctor of how close to the classification threshold value an individual patient's analyzed US image would be, which may influence clinical decisions for repeat confirmatory assessment, especially in cases with borderline index values. An index with a threshold value thus carries intuitive appeal for the clinician, and features-derived indices in diverse applications have been reported in the literature [50,51,52,53,54,55,56,57,58,59].
This model was developed using 163 images. It is a prototype developed using images taken from one center (i.e., Kasturba Hospital Manipal, Manipal). Before deploying for clinical use, the developed model needs to be validated with more images collected from other centers, which is a topic for future work. Herein, we have focused on developing a novel index to discriminate HCM patients from normal subjects. HCM is generally depicted as a distinct cardiomyopathy.
Numerous pathologies can cause left ventricular hypertrophy, such as hypertension, chronic kidney disease, athlete's heart, etc., and the gold-standard diagnostic test remains genetic testing. In this context, the automated detection of HCM represents a novel approach to the noninvasive diagnosis of this disease.
The limitations of the proposed system are:
  • The proposed work categorized sample data into HCM or normal with high accuracy, but did not consider the discernment between HCM and other causes of hypertrophy, which is clinically relevant.
  • It did not classify the images according to the grade of hypertrophy severity, nor the types and extent of hypertrophy patterns among HCM patients.
  • Functional assessment and its role in the prediction of complications associated with HCM were not studied.
  • The method should be independently validated, preferably with larger datasets from multiple centers, before it can be clinically adopted; we plan to address possible uncertainty in the developed model by acquiring more images from various centers in future studies.

6. Conclusions

Hypertrophic cardiomyopathy is a genetic disease of the heart. The generated IIHCM helps to identify HCM cases with a single threshold value, and the proposed indexing entails easy classification that dispenses with the need to manually label the images. The results of the current study are promising and can stimulate new studies using different techniques and more extensive datasets. This approach can not only help to identify the disease but, when employed for serial monitoring, can also assist in understanding longitudinal disease progression. The limitation of this work is that it was developed using a small dataset; we plan to validate our work with images collected from different centers in the future. We also plan to extend the work with more heart US images to characterize various diseases, including ischemic heart disease and other causes of the hypertrophied left ventricle, such as hypertensive heart disease.

Author Contributions

Conceptualization, A.G. and U.R.; Methodology, A.G. and U.R.; Software, A.G., U.R., C.D. and M.R.G.; Validation, J.S. and K.N.; Writing—Review and Editing, A.G., U.R., J.S., E.J.C., R.-S.T., F.M. and U.R.A.; Visualization, U.R.A., A.G. and U.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Manipal Academy of Higher Education (MAHE) for providing the required facility to carry out this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsoutsman, T.; Lam, L.; Semsarian, C. Genes, Calcium and Modifying Factors in Hypertrophic Cardiomyopathy. Clin. Exp. Pharmacol. Physiol. 2006, 33, 139–145. [Google Scholar] [CrossRef] [PubMed]
  2. Marian, A. Pathogenesis of diverse clinical and pathological phenotypes in hypertrophic cardiomyopathy. Lancet 2000, 355, 58–60. [Google Scholar] [CrossRef]
  3. Elliott, P.; Charron, P.; Blanes, J.R.G.; Tavazzi, L.; Tendera, M.; Konté, M.; Laroche, C.; Maggioni, A.P. European Cardiomyopathy Pilot Registry: EURObservational Research Programme of the European Society of Cardiology. Eur. Heart J. 2015, 37, 164–173. [Google Scholar] [CrossRef]
  4. Tucholski, T.; Cai, W.; Gregorich, Z.R.; Bayne, E.F.; Mitchell, S.D.; McIlwain, S.J.; de Lange, W.J.; Wrobbel, M.; Karp, H.; Hite, Z.; et al. Distinct hypertrophic cardiomyopathy genotypes result in convergent sarcomeric proteoform profiles revealed by top-down proteomics. Proc. Natl. Acad. Sci. USA 2020, 117, 24691–24700. [Google Scholar] [CrossRef]
  5. Rowin, E.J.; Maron, M.S.; Chan, R.H.; Hausvater, A.; Wang, W.; Rastegar, H.; Maron, B.J. Interaction of Adverse Disease Related Pathways in Hypertrophic Cardiomyopathy. Am. J. Cardiol. 2017, 120, 2256–2264. [Google Scholar] [CrossRef]
  6. Maron, B.J.; Thompson, P.D.; Ackerman, M.J.; Balady, G.; Berger, S.; Cohen, D.; Dimeff, R.; Douglas, P.S.; Glover, D.W.; Hutter, A.M.; et al. Recommendations and Considerations Related to Preparticipation Screening for Cardiovascular Abnormalities in Competitive Athletes: 2007 Update. Circulation 2007, 115, 1643–1655. [Google Scholar] [CrossRef] [Green Version]
  7. Elliott, P.M.; Anastasakis, A.; Borger, M.A.; Borggrefe, M.; Cecchi, F.; Charron, P.; Hagege, A.A.; Lafont, A.; Limongelli, G.; Mahrholdt, H.; et al. 2014 ESC Guidelines on diagnosis and management of hypertrophic cardiomyopathy: The Task Force for the Diagnosis and Management of Hypertrophic Cardiomyopathy of the European Society of Cardiology (ESC). Eur. Heart J. 2014, 35, 2733–2779. [Google Scholar] [CrossRef]
  8. O’Mahony, C.; Jichi, F.; Pavlou, M.; Monserrat, L.; Anastasakis, A.; Rapezzi, C.; Biagini, E.; Gimeno, J.R.; Limongelli, G.; McKenna, W.J.; et al. A novel clinical risk prediction model for sudden cardiac death in hypertrophic cardiomyopathy (HCM Risk-SCD). Eur. Heart J. 2013, 35, 2010–2020. [Google Scholar] [CrossRef]
  9. Pantazis, A.; Vischer, A.S.; Perez-Tome, M.C.; Castelletti, S. Diagnosis and management of hypertrophic cardiomyopathy. Echo Res. Pract. 2015, 2, R45–R53. [Google Scholar] [CrossRef] [Green Version]
  10. Antunes, M.D.O.; Scudeler, T.L. Hypertrophic cardiomyopathy. IJC Heart Vasc. 2020, 27, 100503. [Google Scholar] [CrossRef]
  11. Shetty, R. Evaluation of Subtle Left Ventricular Systolic Abnormalities in Adult Patients with Hypertrophic Cardiomyopathy. J. Clin. Diagn. Res. 2014, 8, MC05–MC09. [Google Scholar] [CrossRef] [PubMed]
  12. Jahmunah, V.; Oh, S.L.; Wei, J.K.E.; Ciaccio, E.J.; Chua, K.; San, T.R.; Acharya, U.R. Computer-aided diagnosis of congestive heart failure using ECG signals—A review. Phys. Medica 2019, 62, 95–104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Sudarshan, V.; Acharya, U.R.; Ng, E.Y.-K.; Meng, C.S.; Tan, R.S.; Ghista, D.N. Automated Identification of Infarcted Myocardium Tissue Characterization Using Ultrasound Images: A Review. IEEE Rev. Biomed. Eng. 2014, 8, 86–97. [Google Scholar] [CrossRef] [PubMed]
  14. Kusunose, K. Radiomics in Echocardiography: Deep Learning and Echocardiographic Analysis. Curr. Cardiol. Rep. 2020, 22, 89. [Google Scholar] [CrossRef]
  15. Raghavendra, U.; Acharya, U.R.; Gudigar, A.; Shetty, R.; Krishnananda, N.; Pai, U.; Samanth, J.; Nayak, C. Automated screening of congestive heart failure using variational mode decomposition and texture features extracted from ultrasound images. Neural Comput. Appl. 2017, 28, 2869–2878. [Google Scholar] [CrossRef]
  16. Raghavendra, U.; Fujita, H.; Gudigar, A.; Shetty, R.; Nayak, K.; Pai, U.; Samanth, J.; Acharya, U. Automated technique for coronary artery disease characterization and classification using DD-DTDWT in ultrasound images. Biomed. Signal Process. Control 2018, 40, 324–334. [Google Scholar] [CrossRef]
  17. Gudigar, A.; Raghavendra, U.; Devasia, T.; Nayak, K.; Danish, S.M.; Kamath, G.; Samanth, J.; Pai, U.M.; Nayak, V.; Tan, R.S.; et al. Global weighted LBP based entropy features for the assessment of pulmonary hypertension. Pattern Recognit. Lett. 2019, 125, 35–41. [Google Scholar] [CrossRef]
  18. Gudigar, A.; Samanth, J.; Raghavendra, U.; Dharmik, C.; Vasudeva, A.; Padmakumar, R.; Tan, R.-S.; Ciaccio, E.J.; Molinari, F.; Acharya, U.R. Local Preserving Class Separation Framework to Identify Gestational Diabetes Mellitus Mother Using Ultrasound Fetal Cardiac Image. IEEE Access 2020, 8, 229043–229051. [Google Scholar] [CrossRef]
  19. Balaji, G.; Subashini, T.; Chidambaram, N. Detection and diagnosis of dilated cardiomyopathy and hypertrophic cardiomyopathy using image processing techniques. Eng. Sci. Technol. Int. J. 2016, 19, 1871–1880. [Google Scholar] [CrossRef] [Green Version]
  20. Sharon, J.J.; Anbarasi, L.J.; Raj, B.E. DPSO-FCM based segmentation and Classification of DCM and HCM Heart Diseases. In Proceedings of the 2018 Fifth HCT Information Technology Trends (ITT), Dubai, United Arab Emirates, 28–29 November 2018; pp. 41–46. [Google Scholar] [CrossRef]
  21. Zhang, J.; Gajjala, S.; Agrawal, P.; Tison, G.H.; Hallock, L.A.; Beussink-Nelson, L.; Lassen, M.H.; Fan, E.; Aras, M.A.; Jordan, C.; et al. Fully automated echocardiogram interpretation in clinical practice:feasibility and diagnostic accuracy. Circulation 2018, 138, 1623–1635. [Google Scholar] [CrossRef]
  22. Yu, F.; Huang, H.; Yu, Q.; Ma, Y.; Zhang, Q.; Zhang, B. Artificial intelligence-based myocardial texture analysis in etiological differentiation of left ventricular hypertrophy. Ann. Transl. Med. 2021, 9, 108. [Google Scholar] [CrossRef] [PubMed]
  23. Sharma, A.; Agrawal, M.; Roy, S.D.; Gupta, V.; Vashisht, P.; Sidhu, T. Deep learning to diagnose Peripapillary Atrophy in retinal images along with statistical features. Biomed. Signal Process. Control 2020, 64, 102254. [Google Scholar] [CrossRef]
  24. Raghavendra, U.; Fujita, H.; Bhandary, S.V.; Gudigar, A.; Tan, J.H.; Acharya, U.R. Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images. Inf. Sci. 2018, 441, 41–49. [Google Scholar] [CrossRef]
  25. Acharya, U.R.; Fujita, H.; Oh, S.L.; Raghavendra, U.; Tan, J.H.; Adam, M.; Gertych, A.; Hagiwara, Y. Automated identification of shockable and non-shockable life-threatening ventricular arrhythmias using convolutional neural network. Futur. Gener. Comput. Syst. 2018, 79, 952–959. [Google Scholar] [CrossRef]
  26. Tan, J.H.; Bhandary, S.; Sivaprasad, S.; Hagiwara, Y.; Bagchi, A.; Raghavendra, U.; Rao, A.K.; Raju, B.; Shetty, N.S.; Gertych, A.; et al. Age-related Macular Degeneration detection using deep convolutional neural network. Future Gener. Comput. Syst. 2018, 87, 127–135. [Google Scholar] [CrossRef]
  27. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  28. Jabid, T.; Kabir, H.; Chae, O. Gender Classification Using Local Directional Pattern (LDP). In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2162–2165. [Google Scholar] [CrossRef]
  29. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  30. Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160. [Google Scholar] [CrossRef] [Green Version]
  31. ImageNet. Available online: http://www.image-net.org/ (accessed on 17 March 2021).
  32. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [Google Scholar] [CrossRef]
  33. Chu, Y.; Yue, X.; Yu, L.; Sergei, M.; Wang, Z. Automatic Image Captioning Based on ResNet50 and LSTM with Soft Attention. Wirel. Commun. Mob. Comput. 2020, 2020, 8909458. [Google Scholar] [CrossRef]
  34. Chougrad, H.; Zouaki, H.; Alheyane, O. Deep Convolutional Neural Networks for breast cancer screening. Comput. Methods Programs Biomed. 2018, 157, 19–30. [Google Scholar] [CrossRef]
  35. Theckedath, D.; Sedamkar, R.R. Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks. SN Comput. Sci. 2020, 1, 79. [Google Scholar] [CrossRef] [Green Version]
  36. Altaf, F.; Islam, S.M.S.; Janjua, N.K. A novel augmented deep transfer learning for classification of COVID-19 and other thoracic diseases from X-rays. Neural Comput. Appl. 2021, 33, 14037–14048. [Google Scholar] [CrossRef] [PubMed]
  37. Al-Antari, M.A.; Han, S.-M.; Kim, T.-S. Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput. Methods Programs Biomed. 2020, 196, 105584. [Google Scholar] [CrossRef] [PubMed]
  38. Luo, W.; Liu, J.; Huang, Y.; Zhao, N. An effective vitiligo intelligent classification system. J. Ambient Intell. Humaniz. Comput. 2020, 1–10. [Google Scholar] [CrossRef]
  39. Hong, J.; Cheng, H.; Zhang, Y.-D.; Liu, J. Detecting cerebral microbleeds with transfer learning. Mach. Vis. Appl. 2019, 30, 1123–1133. [Google Scholar] [CrossRef]
  40. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [Google Scholar] [CrossRef] [Green Version]
  41. Jason, Y.; Jeff, C.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Processing Syst. (NIPS) 2014, 27. [Google Scholar]
  42. Zhou, N.; Wang, L. A Modified T-test Feature Selection Method and Its Application on the HapMap Genotype Data. Genom. Proteom. Bioinform. 2007, 5, 242–249. [Google Scholar] [CrossRef] [Green Version]
  43. Aslam, M.W.; Zhu, Z.; Nandi, A.K. Feature generation using genetic programming with comparative partner selection for diabetes classification. Expert Syst. Appl. 2013, 40, 5402–5412. [Google Scholar] [CrossRef]
  44. Glen, S. T Test (Student’s T-Test): Definition and Examples. 2020. Available online: https://www.statisticshowto.com/probability-and-statistics/t-test/ (accessed on 6 December 2020).
  45. Specht, D.F. Probabilistic neural networks. Neural. Netw. 1990, 3, 109–118. [Google Scholar] [CrossRef]
  46. Kecman, V. Learning and Soft Computing; MIT Press: Cambridge, UK, 2001. [Google Scholar]
  47. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning, 2nd ed.; Springer: New York, NY, USA, 2008. [Google Scholar]
  48. Christianini, N.; Shawe-Taylor, J. An introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  49. Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: Amsterdam, The Netherland, 2011. [Google Scholar]
  50. Acharya, U.R.; Fujita, H.; Sudarshan, V.K.; Sree, V.S.; Eugene, L.W.J.; Ghista, D.N.; Tan, R.S. An integrated index for detection of Sudden Cardiac Death using Discrete Wavelet Transform and nonlinear features. Knowl.-Based Syst. 2015, 83, 149–158. [Google Scholar] [CrossRef]
  51. Ghista, D.N. Nondimensional physiological indices for medical assessment. J. Mech. Med. Biol. 2009, 9, 643–669. [Google Scholar] [CrossRef]
  52. Acharya, U.R.; Raghavendra, U.; Fujita, H.; Hagiwara, Y.; Koh, J.E.; Hong, T.J.; Sudarshan, V.K.; Vijayananthan, A.; Yeong, C.H.; Gudigar, A.; et al. Automated characterization of fatty liver disease and cirrhosis using curvelet transform and entropy features extracted from ultrasound images. Comput. Biol. Med. 2016, 79, 250–258. [Google Scholar] [CrossRef] [PubMed]
  53. Fujita, H.; Acharya, U.R.; Sudarshan, V.K.; Ghista, D.N.; Sree, S.V.; Eugene, L.W.J.; Koh, J.E. Sudden cardiac death (SCD) prediction based on nonlinear heart rate variability features and SCD index. Appl. Soft Comput. 2016, 43, 510–519. [Google Scholar] [CrossRef]
  54. Acharya, U.R.; Fujita, H.; Sudarshan, V.K.; Mookiah, M.R.K.; Koh, J.E.; Tan, J.H.; Hagiwara, Y.; Chua, C.K.; Junnarkar, S.P.; Vijayananthan, A.; et al. An integrated index for identification of fatty liver disease using radon transform and discrete cosine transform features in ultrasound images. Inf. Fusion 2016, 31, 43–53. [Google Scholar] [CrossRef]
  55. Raghavendra, U.; Acharya, U.R.; Ng, E.Y.K.; Tan, J.-H.; Gudigar, A. An integrated index for breast cancer identification using histogram of oriented gradient and kernel locality preserving projection features extracted from thermograms. Quant. Infrared Thermogr. J. 2016, 13, 195–209. [Google Scholar] [CrossRef]
  56. Sharma, R.; Pachori, R.B.; Acharya, U.R. An Integrated Index for the Identification of Focal Electroencephalogram Signals Using Discrete Wavelet Transform and Entropy Measures. Entropy 2015, 17, 5218–5240. [Google Scholar] [CrossRef] [Green Version]
  57. Raghavendra, U.; Acharya, U.R.; Gudigar, A.; Tan, J.H.; Fujita, H.; Hagiwara, Y.; Molinari, F.; Kongmebhol, P.; Ng, K.H. Fusion of spatial gray level dependency and fractal texture features for the characterization of thyroid lesions. Ultrasonics 2017, 77, 110–120. [Google Scholar] [CrossRef]
  58. Pham, T.-H.; Raghavendra, U.; Koh, J.E.W.; Gudigar, A.; Chan, W.Y.; Hamid, M.T.R.; Rahmat, K.; Fadzli, F.; Ng, K.H.; Ooi, C.P.; et al. Development of breast papillary index for differentiation of benign and malignant lesions using ultrasound images. J. Ambient Intell. Humaniz. Comput. 2020, 12, 2121–2129. [Google Scholar] [CrossRef]
  59. Acharya, U.R.; Mookiah, M.R.K.; Koh, J.E.; Tan, J.H.; Noronha, K.; Bhandary, S.; Rao, A.K.; Hagiwara, Y.; Chua, C.K.; Laude, A. Novel risk index for the identification of age-related macular degeneration using radon transform and DWT features. Comput. Biol. Med. 2016, 73, 131–140. [Google Scholar] [CrossRef]
  60. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. NIPS 2012, 60, 84–90. [Google Scholar] [CrossRef]
  61. Redmon, J. 2013–2016 “Darknet: Open Source Neural Networks in C”. Available online: https://pjreddie.com/darknet (accessed on 13 October 2021).
  62. BVLC GoogLeNet Model. Available online: https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet (accessed on 13 October 2021).
  63. Raghavendra, U.; Gudigar, A.; Rao, T.N.; Rajinikanth, V.; Ciaccio, E.J.; Yeong, C.H.; Satapathy, S.C.; Molinari, F.; Acharya, U.R. Feature-versus deep learning-based approaches for the automated detection of brain tumor with magnetic resonance images: A comparative study. Int. J. Imaging Syst. Technol. 2021, 32, 501–516. [Google Scholar] [CrossRef]
  64. Vafaeezadeh, M.; Behnam, H.; Hosseinsabet, A.; Gifani, P. A deep learning approach for the automatic recognition of prosthetic mitral valve in echocardiographic images. Comput. Biol. Med. 2021, 133, 104388. [Google Scholar] [CrossRef] [PubMed]
  65. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for im-balanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 1322–1328. [Google Scholar]
  66. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  67. Theodoridis, S.; Koutroumbas, K. Pattern Recognition; Academic Press: San Diego, CA, USA, 1999; pp. 341–342. [Google Scholar]
  68. Liu, H.; Motoda, H. Feature Selection for Knowledge Discovery and Data Mining. Kluwer International Series in Engineering and Computer Science 454; Kluwer Academic Publishers: Boston, MA, USA, 1998. [Google Scholar]
Figure 1. Coronal sections of the heart depicting morphological differences in normal versus hypertrophic cardiomyopathy.
Figure 2. Example apical 4-chamber US images of normal versus hypertrophic cardiomyopathy participants.
Figure 3. Schema of the deep-features-based proposed architecture.
Figure 4. Preprocessed image derived by creating a mask.
Figure 5. Computation of the local directional pattern. Kirsch mask responses (M0, M1, M2, …, M7) are obtained for each central pixel using eight Kirsch masks rotated through the eight compass directions (east, north-east, north, …, south-east). The neighboring pixel with the maximum Kirsch mask output determines the local direction. To form the local directional pattern, all eight neighboring pixels are sorted by their responses (M0, M1, M2, …, M7), and “one” and “zero” are assigned to the highest and lowest four pixels, respectively (b0, b1, b2, …, b7). The process is repeated throughout the image.
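The per-pixel computation described in Figure 5 can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the mask rotation order, the bit-packing convention, and the helper names are assumptions, and k = 4 follows the caption's "highest four" rule (classic LDP papers often use the top three responses instead).

```python
import numpy as np

def kirsch_masks():
    # Border values of the east Kirsch mask, clockwise from position (0, 0);
    # the other seven masks are 45-degree rotations of this border.
    base = [-3, -3, 5, 5, 5, -3, -3, -3]
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for k in range(8):
        rot = base[-k:] + base[:-k] if k else base
        mk = np.zeros((3, 3), dtype=int)  # center coefficient stays 0
        for (r, c), v in zip(ring, rot):
            mk[r, c] = v
        masks.append(mk)
    return masks

def ldp_code(patch, masks, k=4):
    # Mask responses M0..M7 for one 3x3 neighborhood; the k strongest
    # absolute responses receive bit "1", the rest "0" (Figure 5).
    responses = np.array([int((patch * mk).sum()) for mk in masks])
    order = np.argsort(np.abs(responses))[::-1]  # strongest response first
    bits = np.zeros(8, dtype=int)
    bits[order[:k]] = 1
    # Pack the bit pattern b7..b0 into a decimal code in [0, 255].
    return int(sum(int(b) << i for i, b in enumerate(bits)))
```

Sliding this over every pixel yields the LDP image; a histogram of the resulting codes is the usual texture descriptor.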
Figure 6. Preprocessed 4-chamber heart images and the corresponding LDP images.
Figure 7. Feature generation blocks of the proposed method.
Figure 8. Feature distribution before and after indexing.
Figure 9. Distributions of the highest-ranked features obtained from the various methods.
Figure 10. Index application to RNet18, ANet, and GNet features.
Figure 11. Classification using proposed IIHCM methodology with the LDPRes870 feature on divided training (70%) and testing data (30%).
Figure 12. (a) Using complete feature space, and (b) using the 10 most significant LDP-RNet50 features.
Figure 13. Receiver operating characteristic (ROC) curve obtained for the proposed approach.
Table 1. Ranked Features with Means, Standard Deviations (SD), and p- and t-Values.

Feature    | Normal Mean | Normal SD | HCM Mean | HCM SD | p-Value       | t-Value
LDPRes870  | 1.0906      | 0.1285    | 1.7401   | 0.1585 | 5.1 × 10^−65  | 28.6105
LDPRes1731 | 1.6194      | 0.3706    | 2.7880   | 0.3472 | 1.58 × 10^−45 | 20.0121
LDPRes1313 | 2.1210      | 0.3118    | 1.3068   | 0.2749 | 1.57 × 10^−37 | 16.9102
LDPRes1701 | 0.8078      | 0.1347    | 0.4998   | 0.0998 | 6.62 × 10^−34 | 15.5594
LDPRes1100 | 0.9853      | 0.1107    | 1.2622   | 0.1160 | 5.59 × 10^−33 | 15.2183
LDPRes54   | 0.0675      | 0.0336    | 0.1584   | 0.0438 | 4.1 × 10^−32  | 14.9010
LDPRes110  | 1.1768      | 0.2628    | 1.8776   | 0.3561 | 9.59 × 10^−31 | 14.4011
LDPRes1351 | 0.7101      | 0.2133    | 0.3076   | 0.1133 | 7.92 × 10^−29 | 13.7051
LDPRes223  | 0.1786      | 0.0606    | 0.0634   | 0.0351 | 1.58 × 10^−28 | 13.5969
LDPRes770  | 2.4266      | 0.2920    | 1.8258   | 0.2951 | 4.83 × 10^−26 | 12.6988
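The t-values in Table 1 come from a two-sample Student's t-test applied to each deep-feature column. A minimal NumPy sketch of such a ranking is shown below; the pooled-variance formulation and the function names are our assumptions, not the authors' code.

```python
import numpy as np

def t_statistic(a, b):
    # Pooled-variance (Student's) two-sample t-statistic between groups a and b.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

def rank_features(X_normal, X_hcm):
    # Rank feature columns by absolute t-value, most discriminative first,
    # as in Table 1 (larger |t| corresponds to smaller p-value).
    t = np.array([t_statistic(X_normal[:, j], X_hcm[:, j])
                  for j in range(X_normal.shape[1])])
    order = np.argsort(-np.abs(t))
    return order, t[order]
```

The top-ranked column plays the role of SigFea, the single most significant feature fed into the integrated index.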
Table 2. Architecture details of the various deep-learning methods used in this work.

Parameters         | RNet50        | RNet18        | ANet          | DNet          | GNet
Input image size   | 224 × 224 × 3 | 224 × 224 × 3 | 227 × 227 × 3 | 256 × 256 × 3 | 224 × 224 × 3
No. of deep layers | 50            | 18            | 8             | 19            | 22
Output layer       | 'avg pool'    | 'pool5'       | 'pool5'       | 'avg1'        | 'pool5-7×7'
No. of features    | 1 × 2048      | 1 × 512       | 1 × 4096      | 1 × 1000      | 1 × 1024
Table 3. Performance of various methods.

Methods    | Acc. (%) | Sen. (%) | Spe. (%) | PPV (%) | F-Score
LDP-RNet18 | 95.12    | 98.38    | 94.05    | 91.04   | 0.9456
LDP-RNet50 | 100      | 100      | 100      | 100     | 1
LDP-ANet   | 87.11    | 82.25    | 90.09    | 83.60   | 0.8291
LDP-DNet   | 84.04    | 83.87    | 84.15    | 76.47   | 0.7999
LDP-GNet   | 93.25    | 90.32    | 95.04    | 91.80   | 0.9105
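The columns of Table 3 follow the standard confusion-matrix definitions, with HCM taken as the positive class. A small sketch (the example counts below are purely illustrative, not the study's actual confusion matrix):

```python
def classification_metrics(tp, tn, fp, fn):
    # Standard metrics from confusion-matrix counts (HCM = positive class).
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    sen = tp / (tp + fn)                    # sensitivity (recall)
    spe = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                    # positive predictive value
    f1 = 2 * ppv * sen / (ppv + sen)        # F-score (harmonic mean of PPV and Sen)
    return acc, sen, spe, ppv, f1
```

With no false positives or false negatives every metric equals 1, which is exactly the LDP-RNet50 row of Table 3.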
Table 4. Highest-ranked features using the various deep-learning approaches.

Method     | Feature      | Normal Mean | Normal SD | HCM Mean  | HCM SD   | p-Value       | t-Value
LDP-RNet18 | LDPRes492    | 0.696662    | 0.155103  | 1.216885  | 0.231307 | 2.98 × 10^−38 | 17.18295
LDP-RNet50 | LDPRes870    | 1.0906      | 0.1285    | 1.7401    | 0.1585   | 5.1 × 10^−65  | 28.6105
LDP-ANet   | LDPAlex1852  | 0.194477    | 0.393206  | 1.13497   | 0.689104 | 1.31 × 10^−21 | 11.09695
LDP-DNet   | LDPDark616   | −0.21708    | 0.421096  | −1.02086  | 0.402245 | 3.41 × 10^−24 | 12.03204
LDP-GNet   | LDPGoogLe902 | 0.547527    | 0.457099  | 1.890254  | 0.790567 | 6.12 × 10^−29 | 13.74575
Table 5. Highest-ranked features using various ranking methods.

Ranking Method | Feature   | Normal Mean | Normal SD | HCM Mean | HCM SD   | p-Value
Entropy        | LDPRes17  | 0.000195    | 0.001399  | 0        | 0        | 0.274759
Bhattacharyya  | LDPRes17  | 0.000195    | 0.001399  | 0        | 0        | 0.274759
ROC            | LDPRes28  | 0           | 0         | 0.000952 | 0.004895 | 0.052053
Wilcoxon       | LDPRes870 | 1.090677    | 0.128562  | 1.740115 | 0.158587 | 5.1 × 10^−65
Table 6. Summary of state-of-the-art work using echocardiogram images/videos.

Paper | Method                               | Result                                                           | Dataset
[19]  | PCA + BPNN                           | Accuracy = 92.04% (normal vs. abnormal (DCM and HCM))            | Echocardiogram videos: 60
[20]  | DPSO-FCM + GLCM and DCT + SVM        | Segmentation accuracy: 95%; classification accuracy: 90%         | Echocardiogram videos: DCM: 40, HCM: 40, normal: 10
[21]  | Multilayer CNN                       | C statistic: 0.93 (for HCM)                                      | HCM: 495 studies to train the model
[22]  | First-order statistics + GLCM + SVM  | Studied possible myocardial texture features with p-value < 0.05 | Transthoracic echocardiography images: HCM, uremic cardiomyopathy, and hypertensive heart disease (50 cases per group)
Ours  | LDP + ResNet-50 + ADASYN + IIHCM     | Accuracy: 100%                                                   | Echocardiography images: normal: 101, HCM: 97
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Gudigar, A.; Raghavendra, U.; Samanth, J.; Dharmik, C.; Gangavarapu, M.R.; Nayak, K.; Ciaccio, E.J.; Tan, R.-S.; Molinari, F.; Acharya, U.R. Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques. J. Imaging 2022, 8, 102. https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8040102

