Article
Peer-Review Record

Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform

by R Anand 1,*, S Veni 2 and J Aravinth 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 February 2021 / Revised: 13 March 2021 / Accepted: 16 March 2021 / Published: 25 March 2021
(This article belongs to the Special Issue Wavelet Transform for Remote Sensing Image Analysis)

Round 1

Reviewer 1 Report

A method of classifying hyperspectral image datasets is proposed (3D-Discrete Wavelet Transform) which is shown to improve the classifiers SVM and KNN. The extra computation time induced by the 3D-DWT needs to be compared against the computation times of SVM and KNN without using it.

The relative advantage of the proposed framework (3D-DWT) should also be addressed and justified by comparing it with other approaches; for example, Generative Adversarial Networks (GANs) achieve very high accuracy with a small number of samples, which is often the case in HSI. Please compare your results with Table 3 of the following paper:

https://0-www-mdpi-com.brum.beds.ac.uk/2072-4292/13/2/198/htm

The paper requires an extensive grammar check. Below is a list of errors up to Section 3, beyond which authors can proofread themselves. 

39-40
Data investigation must usually adapt only similar design, i.e., ? = ?1, ?2, ?3, . , ??, which is, in between an enormous set of attributes ? to N lesser attributes of sample [1]. -> check needed

78 (missing article) Remaining work -> The remaining work

132 (missing article) In this work, model -> In this work, the model

143 Hyperspectral image -> A hyperspectral image

151 This creates four sub-bands in the in the first stage -> This creates four sub-bands in the first stage

153 down sampled -> downsampled

161 to extract feature -> features

165-166
Wavelet transform is a best mathematical tool for performing time – frequency analysis and has wider spectrum in image compression and noise removal technique -> check needed

170 where, a is scaling parameter -> where a is the scaling parameter

171 all the basis function as prototype. -> all the basis functions as prototypes.

175 were,?0,?0 are dyadic scaling and shifting parameter. In multistage analysis
-> where ?0 and ?0 are the dyadic scaling parameter and the shifting parameter, respectively. In multistage analysis 

177 the equation 3 -> Equation 3

179 were, -> where

200, 203 sub band -> sub-band

207 many decision tree -> many decision trees

211 At each split new feature are evaluated. -> features

212 The random forest has taken more times to compute the process to validate the results, but performance was good
-> The random forest has taken more time to compute the process to validate the results, but its performance was good

231 SVM [32] model is a supervised learning method that look at data and sorts into -> SVM [32] model is a supervised learning method that looks at data and sorts it into

236 where, “?”,”?”,”?” indicate -> where “?”,“?”, and “?” indicate

237 “M” is the constraints to get perfect margin. -> “M” is the constraint to get the perfect margin.

239-240 force the distance equal or exceed constraints M. -> force the distance to equal or exceed constraint M.

Eq. 9

243 to avoid wrong classification -> the wrong classification

245 the classes are right side -> the classes are on the right side 

247 it may be in other side of -> it may be on the other side of

Author Response

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Author Response File: Author Response.docx

Reviewer 2 Report

Although the work's objectives are well outlined and the results quite clear and original, the way these are described and developed needs significant improvement. In particular, there is a lack of balance between the description of the wavelet transforms (in some places obscure if not incorrect) and the classification methods used, which are not adequately introduced (again, there are some statements that sound incorrect).

Comments for author File: Comments.pdf

Author Response

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Author Response File: Author Response.docx

Reviewer 3 Report

This paper proposes a 3D discrete wavelet transform technique for hyperspectral image classification. The method comprises three stages: spatial features are extracted in the first two stages and spectral features in the third. The extracted features are then fed to several classifiers. Experiments are conducted on two public hyperspectral data sets to demonstrate the effectiveness of the proposed feature extraction method. The main comments are:

  1. In the abstract, the motivation for proposing the feature extraction method is unclear.
  2. Some advanced feature extraction methods (such as 10.1109/TGRS.2020.2963848 and 10.1109/LGRS.2019.2944970) should be mentioned to support the research on feature extraction in this paper.
  3. In the introduction, the contributions do not clearly convey the innovation of this paper.
  4. In lines 163, 165, and 166, the notation is confusing: for example, the band number is denoted by both λ and b, and it is unclear what y denotes. Eq. (2) also uses b. Please use clearer notation.
  5. The title of Section 4 should be “Experiments”; it currently duplicates the title of Section 3.
  6. More discrete wavelet transform methods should be used for comparison to demonstrate the effectiveness of the proposed method.
  7. This paper merely uses the 3D discrete wavelet transform to extract features of hyperspectral images; in addition, the experimental comparison is unreasonable. The innovation is insufficient for publication in this journal.

Author Response

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report


Thank you for the revision. The comparison with GANs is useful.

Between line 200 and line 201, there is a paragraph without line numbers where you might check capitalization and punctuation again.
e.g., (1) Fig. 2 Demonstrates -> demonstrates (2) Generally, Haar wavelet has, l(d)=(....), and h(d)=(...), (many commas/fragment)
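For reference, the orthonormal Haar analysis filters are a plausible reading of the manuscript's l(d) and h(d) (the exact coefficients are elided in the comment above, so this is only a sketch of the standard convention):

```python
import numpy as np

# Orthonormal Haar analysis filters -- a guess at the manuscript's
# l(d) and h(d), whose coefficients are not reproduced in this record:
l = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (approximation)
h = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (detail)

# Quadrature-mirror sanity checks: unit norm and mutual orthogonality.
print(l @ l, h @ h, l @ h)  # approximately 1, 1, 0
```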

The overall image resolution of Fig 3 must be enhanced. The legend of Fig 7 is illegible. Confusion matrices in Figs 9 and 11 should have higher resolutions and/or larger fonts.

Author Response

Respected Editor:

 

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Comments from Reviewer 1:

  1. Comment 1: Between line 200 and line 201, there is a paragraph without line numbers where you might check capitalization and punctuation again.

Response: Thank you for pointing this out. We agree with this comment. We have added line numbers and corrected the punctuation and capitalization.

 

  2. Comment 2: The overall image resolution of Fig. 3 must be enhanced. The legend of Fig. 7 is illegible. The confusion matrices in Figs. 9 and 11 should have higher resolution and/or larger fonts.

Response: We have revised the figures accordingly: the image resolutions have been increased as suggested, and the updated figures are attached for your reference.

 
The comments and typos mentioned by all three reviewers have been corrected and incorporated in the manuscript.

 

Author Response File: Author Response.docx

Reviewer 2 Report

Comment 1: Average accuracy (or balanced accuracy) could be more appropriate to evaluate algorithm performance on unbalanced data sets.
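The reviewer's point can be illustrated with a toy example (scikit-learn names, with made-up data rather than the paper's): on an unbalanced set, a classifier that always predicts the majority class scores high plain accuracy but only chance-level balanced accuracy.

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Made-up unbalanced labels: 90 samples of class 0, 10 of class 1.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100  # degenerate classifier: always predicts the majority class

print(accuracy_score(y_true, y_pred))           # 0.9 -- looks good
print(balanced_accuracy_score(y_true, y_pred))  # 0.5 -- chance level
```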

 

Comment 2: I think that all the results reported in Tables 2 and 4 are incorrect. You computed each class accuracy by summing over the columns, but, even according to Eq. 12, it has to be calculated as m_ii/N, where N is the number of elements for the class (summation over rows).

 

Comment 3: Figs. 10 and 13 show that the algorithms with the highest performance do not show a significant trend as the number of training samples is varied. I think this behaviour is due to the fact that you are dealing with an unbalanced data set. In this case, it could be more appropriate to train the ML algorithms using the ‘Stratified Cross-Validation’ technique; this ensures that each fold maintains the class ratio of the whole data set. The same behaviour is highlighted by the confusion matrix, where most of the misclassified samples are concentrated around the classes with the highest number of samples.

 

 

 

Here are the specific comments, suggested corrections, and typos.

 

Rows 16-20: I suggest inverting the order of the sentence starting at row 16 with the subsequent one. At row 17 you introduce “classifiers” without any prior explanation; it is not clear.

 

Row 24: “has been observed” → “it has been observed”

 

Rows 34-37: many features are described in a small space. Consider slightly extending and merging them.

 

Row 47: “Discrete Wavelet Transform” → “Discrete Wavelet Transform (DWT)”.

 

Row 48-49: “Ghazali, et al.” → “Ghazali et al.”

 

Row 50: “Chang et. al,” → “Chang et al.”

 

Row 67: “A Lower” → “A lower”

 

Row 67: “in minimized” → “is minimized”

 

Row 84: “Suitable” →”suitable”

 

Row 99: “to the solve” → “to solve”

 

Row 141: “model has” → “the model has”

 

Row 166: “These basis function” →”These basis functions”

 

Row 167: “square” →”square integrable”?

 

Rows 166-168: too cryptic, please rephrase

 

Row 173: “are applied to the classifiers Random forest [33], KNN [33], and SVM [32] and ” →  “are applied to the Random Forest [33], KNN [33], and SVM [32] classifiers, and ”

 

Row 175: “without wavelet features” →”without the use of the wavelet transform”

 

Figure 1: “?=??” in the center of the figure (almost unrecognisable).

 

Row 184: “”w”  →  “w”

 

Row 189: “Where,” → “where”

 

Row 191: “The translation parameter or the shifting parameter” →”The translation or shifting parameter”

 

Row 192:”frequency” →”scale”

 

Rows 6-8 after row 200 on page 5: please explain in more detail: why use different bases? How do they differ? How do the results change?

 

Last two rows on page 5: “which as shown“ →”, as shown“

 

Row 5 page 6: “Fig. 2 Demonstrates” →”Fig. 2 demonstrates”

 

Figure 2: There are some typos in the diagram (“apprxoimatex”). If you are using a standard dyadic decomposition, the second level applies to the LLL element of the first level. If you are using the wavedec3 MATLAB function (please refer to the corresponding help), it is easy to show that, by computing two different decompositions, one at the first level (W1) and another over the first two levels (W2), the elements 2-8 of W1.dec are equal to the elements 9-16 of W2.dec. This means that elements 2-8 of W2.dec are computed on element 1 of W1.dec, and therefore that the U1 element of the diagram is wrong: U1 is the LLL element of the second-level decomposition.
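The dyadic-decomposition rule the reviewer describes can be sketched in one dimension (a Haar toy example, not the authors' code): each further level decomposes only the approximation (low-pass) output of the previous level, so in 3D the second level acts on the LLL sub-band while the level-1 detail sub-bands are carried through unchanged.

```python
import numpy as np

def dwt_step(x):
    # One Haar DWT level: low-pass / high-pass filtering plus dyadic downsampling.
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

x = np.arange(8.0)
a1, d1 = dwt_step(x)   # level 1: applied to the signal itself
a2, d2 = dwt_step(a1)  # level 2: applied ONLY to the level-1 approximation
# d1 stays fixed -- the 3D analogue is that the seven level-1 detail
# sub-bands are untouched while LLL is decomposed again.
print(a2, d2)  # [ 3. 11.] [-2. -2.]
```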

 

Rows 291-299: I think the explanation of KNN was clearer in the previous version; maybe I did not state my point correctly there. The previous explanation of KNN was sufficiently clear, but I cannot tell which value of “k” you chose for your implementation.
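Whatever k the authors used, it should be stated explicitly; in scikit-learn terms (an illustration with made-up data, not the manuscript's setup) it is a single constructor argument:

```python
from sklearn.neighbors import KNeighborsClassifier

# Made-up 1-D data: two well-separated clusters.
X = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y = [0, 0, 0, 1, 1, 1]

# The value of k (n_neighbors) is exactly the detail the reviewer asks for.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[1.5], [10.5]]))  # [0 1]
```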

 

Row 306-307: “. for examples” →”. For example”

 

Row 318: “In equation 9, The” →”In equation 9, the”

 

Equation 9: it is quite different from the previous version. The “y” should be placed in front of the left-hand side, and the margin “M” is missing.
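For reference, the standard maximal-margin constraint the reviewer appears to have in mind (a textbook form, since the manuscript's Eq. 9 is not reproduced in this record) places the label in front and keeps the margin M:

```latex
y_i \left( w \cdot x_i + b \right) \ge M, \qquad i = 1, \dots, n,
```

typically together with the normalization \|w\| = 1, or equivalently M = 1/\|w\| after rescaling.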

 

 

Row 320-321: “that compensates specific vectors the wrong classification” is not clear

 

Row 333: “A Certain” →”A certain”

 

Row 341: “such as,” →”such as”

 

Row 355: “where,” →”where”

 

Eqs. 11 and 12 have the same letter but a different formulation.

 

Row 360: “represents” →”represent”

 

Row 375: “the parameters consider” →”The parameters considered”

 

Row 381:”highlighted” →”are highlighted”

           

Rows 383-384: “As shown in Table 1, 6132 training samples...”. Probably you mean 12602.

 

Row 385:”11.2%”→”14.5%”

 

Incoherence between the ‘Confusion matrix’ and Tab. 1:
  - Class 3: no. of samples 333 ≠ 324
  - Class 4: 93 ≠ 94
  - Class 5: 194 ≠ 193

 

Table 2: Columns 3 and 4, Labels 9 → 0.12 and 0.24. Are these typos?

 

Row 407: “table 3”. Table 3 is empty (there is only the caption, row 415). Please check all subsequent references to Tables 3 and 4.

 

The word ‘data set’ is sometimes written as ‘dataset’. Please check for consistency.

Author Response

Respected Editor:

 

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Comments from Reviewer 2:

  1. Comment 1: Average accuracy (or balanced accuracy) could be more appropriate to evaluate algorithm performances with unbalanced data sets.

Response: Thank you for pointing this out. We agree with this comment. Based on your comments, we added figures showing the average accuracy for different ratios of training and testing samples.

Figure 14: Average Accuracy for Different Ratios of Indian Pines Training Samples

Figure 15: Average Accuracy for Different Ratios of Salinas Scene Training Samples

 

 

  2. Comment 2: I think that all the results reported in Tables 2 and 4 are incorrect. You computed each class accuracy by summing over the columns, but, even according to Eq. 12, it has to be calculated as m_ii/N, where N is the number of elements for the class (summation over rows).

 

Response: Thank you for pointing this out. We agree with this comment. A sample calculation for computing AA is given as follows: consider true class label 2; the total number of test samples of class 2 is 571, and the number correctly predicted by SVM for class 2 is 512, so the class accuracy is 512/571 ≈ 0.90, as shown in Table 2 (third row, fourth column). In a similar way, we evaluated the average accuracy of each class. Table 3 shows the OA, AA, and Kappa coefficient over the entire set of classes for the different algorithms, computed with the help of Equation 12.
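The row-wise convention the reviewer requests can be written out explicitly (a generic sketch with a made-up 3-class confusion matrix, not the paper's data):

```python
import numpy as np

# Made-up confusion matrix: rows = true class, columns = predicted class.
cm = np.array([
    [50, 3, 2],   # class 0: 55 true samples
    [4, 40, 1],   # class 1: 45 true samples
    [1, 2, 30],   # class 2: 33 true samples
])

# Per-class accuracy = m_ii / N_i, where N_i is the ROW sum (true-class count).
per_class = np.diag(cm) / cm.sum(axis=1)

aa = per_class.mean()          # average accuracy (AA)
oa = np.trace(cm) / cm.sum()   # overall accuracy (OA)
print(per_class.round(3), round(float(aa), 3), round(float(oa), 3))
```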

 


where N is the total number of test samples in each class.

 

  3. Comment 3: Figs. 10 and 13 show that the algorithms with the highest performance do not show a significant trend as the number of training samples is varied. I think this behaviour is due to the fact that you are dealing with an unbalanced data set. In this case, it could be more appropriate to train the ML algorithms using the ‘Stratified Cross-Validation’ technique; this ensures that each fold maintains the class ratio of the whole data set. The same behaviour is highlighted by the confusion matrix, where most of the misclassified samples are concentrated on the classes with the highest number of samples.

Response: Thank you for pointing this out. We agree with this comment. We added the paragraph below to our paper (lines 422-430). All the machine learning algorithms were validated by k-fold cross-validation (k = 10). Splitting the training data set into k folds is part of the k-fold cross-validation process: the first k-1 folds are used to train a model, while the remaining kth fold serves as a test set. In each fold, we can use a variant of k-fold cross-validation that maintains the imbalanced class distribution. It is known as stratified k-fold cross-validation, and it ensures that the class distribution in each split of the data is consistent with the distribution of the entire training data set. This underlines the importance of using stratified k-fold cross-validation on imbalanced data sets, to maintain the class distribution in the train and test sets for each model evaluation.
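The procedure described above can be sketched with scikit-learn's StratifiedKFold (toy data, not the hyperspectral sets): every fold preserves the class ratio of the full set.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy unbalanced data set: 80 samples of class 0, 20 of class 1.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
# Fraction of class 0 in each test fold: stratification keeps it at 80%.
ratios = [float(np.mean(y[test_idx] == 0)) for _, test_idx in skf.split(X, y)]
print(ratios)  # 0.8 in every one of the 10 test folds
```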

Figures 14 and 15 show the average accuracy for different ratios of training and testing images of Indian Pines (50:50, 55:45, 60:40, 65:35, 70:30, 75:25) and the Salinas scene (50:50, 55:45, 60:40, 65:35, 70:30, 75:25), respectively. As the number of training samples increases, the average accuracy of each class increases, since the two are directly proportional. From the analysis of Figure 11, when the ratio of training and testing samples is varied, the overall accuracy does not vary much because of the unbalanced data set.

 

The comments and typos mentioned by all three reviewers have been corrected and incorporated in the manuscript.

 

 

Author Response File: Author Response.docx

Reviewer 3 Report

The previous problems have been revised; however, there are still some problems:

  1. In the introduction, please give some applications of HSI for the accurate analysis of the Earth's surface, such as land-cover classification (10.1109/TCYB.2018.2810806) and target detection (10.1109/TGRS.2020.2982406).
  2. Fig. 1 is unclear; a better flowchart should be given.
  3. Fig. 3 is blurred; please revise.
  4. Please discuss the merits and drawbacks of the proposed method according to the experiments.

Author Response

Respected Editor:

 

Thank you for giving us the opportunity to submit a revised draft of our manuscript, “Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform,” to the MDPI journal Remote Sensing. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers and have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers’ comments and concerns.

Comments from Reviewer 3:

  1. Comment 1: In the abstract, the motivation for proposing the feature extraction method is unclear.

Response: Thank you for pointing this out. We agree with this comment; we added the following points and the corresponding references. Lei et al. suggest a new approach for the reconstruction of hyperspectral anomaly images with spectral learning (SLDR) [40]. The use of spatial-spectral data, which will transform conventional classification strategies, poses a huge challenge in distinguishing between different forms of land-use cover during classification [41].

 

  2. Comment 2: Fig. 1 is unclear; a better flowchart should be given.

 

Response: We have revised the flowchart accordingly.

 

 

  3. Comment 3: Fig. 3 is blurred; please revise.

Response: Respected Reviewer, we agree with this and have incorporated your suggestion throughout the manuscript.

 

  4. Comment 4: Please discuss the merits and drawbacks of the proposed method according to the experiments.

 

Response: Respected Reviewer, we agree with this and have incorporated your suggestion throughout the manuscript. As a result, the intrinsic characteristics captured by 3D-DWT+SVM can be effectively represented to improve the discrimination of spectral features for HSI classification. Our future research will concentrate on how to efficiently represent spatial-spectral data and increase computational performance with the help of deep learning algorithms.

 

Author Response File: Author Response.docx
