Peer-Review Record

Hyperspectral Image Spectral–Spatial Classification Method Based on Deep Adaptive Feature Fusion

by Caihong Mu, Yijin Liu and Yi Liu *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 23 January 2021 / Revised: 15 February 2021 / Accepted: 16 February 2021 / Published: 18 February 2021
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Classification)

Round 1

Reviewer 1 Report

The presentation of the paper is good, and the scientific methods used in the paper are sound. Only very minor text editing is required as follows:

  1. line 147: 'ε is a regularization ---' --> 'where ε is a regularization ---'
  2. line 150: 'μk and ---' --> 'where μk and ---'

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The overall research design is appropriate, and the materials and methods are clearly described. A minor spell check is required.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Please see PDF attached.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

The paper is well-organized and the topic is a hot area of research. Here are some of my concerns:

  • Why did the authors use PCA? A denoising autoencoder or other deep architectures can also be used for data dimension reduction. Did the authors explore this possibility?
  • How did the authors set the parameters of the networks (filter size, number of layers, etc.)? Is this an optimized architecture?
  • Table 4 shows that the accuracy in some classes is 100%, but in others it is not. What metrics and criteria can distinguish such slight differences in accuracy (in terms of visual assessment)?
  • Figure 11 shows the class maps of the different methods. The last three methods are very competitive with each other. How do the authors distinguish these three methods?
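The PCA step questioned above is a standard preprocessing choice for hyperspectral classification. As a point of reference only (not the authors' implementation; the cube dimensions and band count here are illustrative), the band-reduction step can be sketched as:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a hyperspectral cube: H x W pixels, B spectral bands.
# (B = 103 loosely mirrors common benchmark scenes; values are illustrative.)
H, W, B = 10, 10, 103
cube = np.random.rand(H, W, B)

# Flatten the spatial dimensions so each row is one pixel's spectrum,
# then project the B bands onto the first few principal components.
pixels = cube.reshape(-1, B)
pca = PCA(n_components=3)
reduced = pca.fit_transform(pixels).reshape(H, W, 3)

print(reduced.shape)  # (10, 10, 3)
```

A denoising autoencoder, as the reviewer suggests, would replace the linear projection above with a learned nonlinear encoder, at the cost of extra training.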

The paper is written very well, and the language is good.


Author Response

Please see the attachment.

Author Response File: Author Response.docx

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.

