Article
Peer-Review Record

Ground-based Assessment of Snowfall Detection over Land Using Polarimetric High Frequency Microwave Measurements

by Cezar Kongoli 1,2,*, Huan Meng 2, Jun Dong 1 and Ralph Ferraro 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 10 September 2020 / Revised: 14 October 2020 / Accepted: 15 October 2020 / Published: 20 October 2020
(This article belongs to the Special Issue Satellite Hydrological Data Products and Their Applications)

Round 1

Reviewer 1 Report

Review of “Ground-based Assessment of Snowfall Detection over Land Using Polarimetric High Frequency Microwave Measurements” by Kongoli et al. (2020).

 

 

The manuscript is well organized and can be suitable for publication in the Remote Sensing journal upon consideration of the minor points mentioned below.

 

  1. Authors didn’t use the same font throughout the manuscript as per the remote sensing journal format.
  2. Page 3, line 107: expand “PMW”.
  3. Page 2, line 47: expand “ATMS”
  4. Page 4, line 164: What is the reason for considering the threshold probability value of 0.5 only?
  5. Authors should change the Fig. 2 into one figure panel with 300 dpi resolution.
  6. In Fig. 6 to Fig. 8, the reflectivity maps and the color bar of reflectivity are not clear.

 

 

 

Author Response

Reviewer 1

--------------------------------------------------------------------------------------------------------------------------------------------

The manuscript is well organized and can be suitable for publication in the Remote Sensing journal upon consideration of the minor points mentioned below.

 

  1. Authors didn’t use the same font throughout the manuscript as per the remote sensing journal format.

 

Response:

We made sure the font is the same (10 pt) throughout the paper, except for the References and Tables. We are hopeful that any minor formatting issues will be resolved during production.

 

 

  2. Page 3, line 107: expand “PMW”.

 

Response:

PMW is now expanded at its first instance (line 33).

 

  3. Page 2, line 47: expand “ATMS”

Response:

Done; other acronyms were expanded as well.

 

  4. Page 4, line 164: What is the reason for considering the threshold probability value of 0.5 only?

Response:

This is a very good question. We have modified the manuscript to address the reasons behind the 0.5 probability threshold value.

The value was selected as the one that yields the maximum overall classification rate. This is now stated in the Methods section, and the established value of 0.5 is reported in the Results section.

Note that the predicted probabilities computed from the fitted logistic regression coefficients in Eqs. (1) and (2) do not depend on the selected threshold probability. The results of the comparative analysis among models, and thus our interpretations, hold true for other threshold probability values.
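As a minimal numerical illustration of this point (the coefficient and predictor values below are hypothetical, not the paper's fitted values), the probabilities produced by the logistic model are fixed once the coefficients are estimated; varying the threshold only changes the binary snowfall/no-snowfall assignment:

```python
import numpy as np

# Hypothetical fitted logistic coefficients and predictor values (illustration only)
b0, b1 = -1.2, 0.08                              # intercept and slope
tb_diff = np.array([5.0, 12.0, 20.0, 35.0])      # example brightness-temperature differences (K)

# Predicted snowfall probability from the logistic transformation (Eq. 1 form)
p = 1.0 / (1.0 + np.exp(-(b0 + b1 * tb_diff)))

# The probabilities stay the same for every threshold; only the class labels change
for thresh in (0.4, 0.5, 0.6):
    snow_flag = p >= thresh
    print(f"threshold={thresh}: p={np.round(p, 3)}, snowfall={snow_flag}")
```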

 

 

  5. Authors should change the Fig. 2 into one figure panel with 300 dpi resolution.

Done, improved.

 

  6. In Fig. 6 to Fig. 8, the reflectivity maps and the color bar of reflectivity are not clear.

Done, improved.

Reviewer 2 Report

This paper studies the suitability of polarimetric high frequency microwave measurements for snowfall detection over land using a difference-based set of indicators, and compares their performance with the absolute values of others currently used. For this, a ground-based assessment is made from data collected in the USA, and the results are further validated by including a qualitative retrieval case. The topic is highly relevant in the current context of both methods and applications, with the detection of rain/snowfall being a key issue in the study of the hydrological response, signature, and trends in mountain and cold regions over the world, for which remote sensing is the only feasible option at medium to large scales. This detection can bring significant uncertainty to the hydrological calculations in many areas, and improving our capability for remote quantification of snowfall occurrence is crucial to increase both the skill and performance of models and predictions.

The research design is sound and the justification of the state of the art and the innovation brought by this work is very complete; I enjoyed reading the introduction very much. The assessment of the method against ground data is a valuable aspect of the work, and the comparison of indicators is a timely analysis, for whose conclusions the retrieval example provides a supportive point.

I have some minor-to-moderate comments, mainly related to clarifying some details in the description of the work, that will improve its understanding and applications beyond the results included here. Additionally, the current version of the figures requires some refining (see comments below):

  1. Although they might be well known, I recommend that all acronyms be fully presented the first time they appear in the text.
  2. Lines 113-114, please, specify how snowfall accumulation as SWE was reported (measured, estimated from snowfall volumes…)
  3. Lines 121-124, please, clarify a bit further the removal of rain from the no-snowfall sample
  4. Some description (global statistics) of the samples of snowfall and no-snowfall events would add relevant information for the readers (number of elements in each sample, etcetera)
  5. Lines 127-128, how was this threshold selected?
  6. Lines 148-149, please, add some reference here
  7. Line 160, please, add/clarify how the values of P were obtained to be included in the calibration of parameters in equation (1)
  8. Lines 164-168, the point is clear, but some comment on why a threshold value of 0.5 was selected and how this value impacts the classification results is needed here.
  9. Line 178 (and 193), is Diff89 a typo? Otherwise, please, define here.
  10. Lines 180-182, again, please, briefly justify the selection of values as thresholds
  11. Line 188, please, describe in the previous section how SFR was obtained/measured.
  12. Figure 1’s formatting needs to be improved (refining the font size, axis labels format (integers), symbol for the data, etcetera)
  13. Table 1 should be shrunk to match the edition format; I suggest including here also the significance information referred to in the body text (lines 187-195)
  14. Figure 2, the graphs can be significantly improved (especially, too large for the edition format of the page) and both diffTB166 and diffTB89 (check the Y-axis title) can be represented together.
  15. Tables 3 and 5’s captions, please homogenize the content and include as much detail as possible.
  16. Lines 275-280, this information could have been explained in methods and justified why it is included in this stage and not from the beginning. Additionally, please, describe in methods the indicators used here to test the performance of the model.
  17. Table 7 contains 7 steps whereas table 6 only presents 5 predictors, is there any gap in the table or missing information?
  18. Section 4.2, please, add some comment on why these events were selected
  19. Discussion, please, could you assess or indicate what threshold intensity this method is sensitive enough to detect?
  20. Conclusions, lines 497-499, is this applicable to all ranges of snowfall intensities?
  21. Conclusions, please, add what future steps are envisaged to produce a generally applicable methodology in the future, or what major needs still require efforts.
  22. References, I recommend checking the edition format thoroughly to make it uniform under the guidelines of the journal.

Final comment: the retrieval example is the best way to justify and consolidate the conclusions of this analysis; however, a quantitative assessment of this example would definitely bring more light onto the level of accuracy/performance of the methodology. I am aware that this might involve a significantly higher workload, but I leave it up to the Authors to consider the inclusion of some quantitative indicators of this in this manuscript.

Author Response

This paper studies the suitability of polarimetric high frequency microwave measurements for snowfall detection over land using a difference-based set of indicators, and compares their performance with the absolute values of others currently used. For this, a ground-based assessment is made from data collected in the USA, and the results are further validated by including a qualitative retrieval case. The topic is highly relevant in the current context of both methods and applications, with the detection of rain/snowfall being a key issue in the study of the hydrological response, signature, and trends in mountain and cold regions over the world, for which remote sensing is the only feasible option at medium to large scales. This detection can bring significant uncertainty to the hydrological calculations in many areas, and improving our capability for remote quantification of snowfall occurrence is crucial to increase both the skill and performance of models and predictions.

The research design is sound and the justification of the state of the art and the innovation brought by this work is very complete; I enjoyed reading the introduction very much. The assessment of the method against ground data is a valuable aspect of the work, and the comparison of indicators is a timely analysis, for whose conclusions the retrieval example provides a supportive point.

I have some minor-to-moderate comments, mainly related to clarifying some details in the description of the work, that will improve its understanding and applications beyond the results included here. Additionally, the current version of the figures requires some refining (see comments below):

  1. Although they might be well known, I recommend that all acronyms be fully presented the first time they appear in the text.

 

Done.

 

  2. Lines 113-114, please, specify how snowfall accumulation as SWE was reported (measured, estimated from snowfall volumes…)

 

Measured (as with rain gauges) from the melted snow volume. Added in the text.

 

  3. Lines 121-124, please, clarify a bit further the removal of rain from the no-snowfall sample

Done.

The description of the selection process was modified to make clear how rain identification and removal were performed. Instead of the word “flag”, we use the word “classify” to refer to the training-data selection of a case; “flag” is used in the context of the surface weather reporting of the weather type. Rain in the surface weather reports has its own indicator (flag), so rain cases were easy to select and, once selected, they were removed from consideration. In other words, the algorithm was trained with snowfall and no-precipitation (referred to as “no-snowfall” in the paper) data.
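A minimal sketch of this selection step (the column names and flag codes below are illustrative assumptions, not the actual surface-report format used in the paper):

```python
import pandas as pd

# Hypothetical surface weather reports; "weather_flag" codes are assumed for illustration
reports = pd.DataFrame({
    "station":      ["A", "A", "B", "C"],
    "weather_flag": ["SN", "RA", "", "SN"],   # SN = snow, RA = rain, "" = no precipitation
    "swe_mm":       [0.5, 1.2, 0.0, 0.3],
})

# Rain-flagged reports are removed first; the remainder is classified as
# snowfall vs. no-precipitation ("no-snowfall") training cases.
no_rain = reports[reports["weather_flag"] != "RA"].copy()
no_rain["label"] = no_rain["weather_flag"].map(lambda f: "snowfall" if f == "SN" else "no-snowfall")
print(no_rain)
```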

 

 

  4. Some description (global statistics) of the samples of snowfall and no-snowfall events would add relevant information for the readers (number of elements in each sample, etcetera)

 

Done. The sample size is about 15,000 elements, with a 40:60 distribution between the snowfall and no-snowfall cases; this is added in the text.

 

  5. Lines 127-128, how was this threshold selected?

 

This is the minimum amount recorded by the weather instrument, i.e., its measurement resolution. Amounts below this minimum measurable value are reported as “trace”. So, we chose to set the “trace” SWE at the recorded minimum, 0.25 mm/hr.

 

 

  6. Lines 148-149, please, add some reference here.

References are provided [10-12].  

 

  7. Line 160, please, add/clarify how the values of P were obtained to be included in the calibration of parameters in equation (1)

 

Done. We clarify that the parameters in Equation (1) are estimated by the maximum likelihood method. Once they are determined, P is easily computed from a simple transformation of Eq. (1).
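A brief sketch of this procedure under simplifying assumptions (one synthetic predictor and labels generated here purely for illustration; the paper uses the actual brightness-temperature predictors):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration data: one predictor and a binary snowfall label
rng = np.random.default_rng(0)
x = rng.normal(10.0, 5.0, size=500)                 # e.g., a brightness-temperature difference (K)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.25 * x)))
y = rng.binomial(1, p_true)                         # 1 = snowfall, 0 = no snowfall

# Maximum-likelihood estimation of the logistic regression coefficients
X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=False)
b0, b1 = fit.params

# Probability P recovered from the fitted coefficients via the logistic transformation of Eq. (1)
p_hat = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
print(fit.params, p_hat[:5])
```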

 

  8. Lines 164-168, the point is clear, but some comment on why a threshold value of 0.5 was selected and how this value impacts the classification results is needed here.

 

Done, this is an excellent point. The threshold was selected such that the overall classification rate is maximized. In the Results, we then report the established value of 0.5.
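A short sketch of the kind of threshold sweep this describes (the names and candidate grid are assumptions; `p_hat` and `y` stand for the predicted probabilities and observed labels, e.g., from the fitting sketch above):

```python
import numpy as np

def best_threshold(p_hat, y, candidates=np.arange(0.05, 1.00, 0.05)):
    """Return (threshold, rate) maximizing the overall classification rate."""
    rates = [(t, float(np.mean((p_hat >= t).astype(int) == y))) for t in candidates]
    return max(rates, key=lambda tr: tr[1])

# Example use with fitted probabilities and observed labels:
# threshold, rate = best_threshold(p_hat, y)
```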

 

 

  9. Line 178 (and 193), is Diff89 a typo? Otherwise, please, define here.

Done; it was a typo.

 

  10. Lines 180-182, again, please, briefly justify the selection of values as thresholds.

 

This was done empirically, by visual inspection.  Consistent with the expected responses, application of the thresholds mostly removed only the coastal cases.  

       

 

  11. Line 188, please, describe in the previous section how SFR was obtained/measured.

SFR simply denotes the station-measured surface snowfall accumulation in liquid equivalent. We have added text to make this clear.

 

  12. Figure 1’s formatting needs to be improved (refining the font size, axis labels format (integers), symbol for the data, etcetera)

 

Done.

 

  13. Table 1 should be shrunk to match the edition format; I suggest including here also the significance information referred to in the body text (lines 187-195)

 

Significance P-values are excluded so as not to overwhelm the table, but they are commented on where appropriate.

 

  14. Figure 2, the graphs can be significantly improved (especially, too large for the edition format of the page) and both diffTB166 and diffTB89 (check the Y-axis title) can be represented together.

Done, improved.

  15. Tables 3 and 5’s captions, please homogenize the content and include as much detail as possible.

 

Done. We added substantial explanation and homogenized the captions. We also changed the P-value notation: a P-value of 0.0, strictly speaking, does not exist, so such values are reported as <0.01.

 

  16. Lines 275-280, this information could have been explained in methods and justified why it is included in this stage and not from the beginning. Additionally, please, describe in methods the indicators used here to test the performance of the model.

Done.

The main metric for model inter-comparison is the overall classification rate; this is now added in the Methods section, together with the definitions of the POD, FAR and HSS scores.
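For reference, the standard 2x2 contingency-table forms of these scores can be computed as below (the counts in the example call are made up, not the paper's results):

```python
def detection_scores(hits, false_alarms, misses, correct_negatives):
    """POD, FAR, overall classification rate and HSS from a 2x2 contingency table."""
    n = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    overall = (hits + correct_negatives) / n           # overall classification rate
    # Heidke Skill Score: accuracy relative to the number expected correct by chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return {"POD": pod, "FAR": far, "overall": overall, "HSS": hss}

# Hypothetical example: detection_scores(120, 30, 40, 310)
```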

 

  17. Table 7 contains 7 steps whereas Table 6 only presents 5 predictors, is there any gap in the table or missing information?

 

No. Table 6 gives the final selected model's regression coefficients and significance statistics, whereas Table 7 gives the stepwise selection steps leading to the final model presented in Table 6. Table 7 reveals channel importance and removal in sequential steps, whereas Table 6 presents the final model.
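Purely as an illustration of how a stepwise log (Table-7 style) relates to a final retained model (Table-6 style), a generic backward-elimination loop might look like the sketch below; the actual selection criterion and software used in the paper may differ, and the function and argument names are assumptions:

```python
import statsmodels.api as sm

def backward_stepwise(X, y, alpha=0.05):
    """Drop the least significant predictor at each step until all remaining
    predictors are significant. `steps` records each intermediate model
    (a Table-7-style log); `predictors` is the final retained set (Table-6-style)."""
    predictors = list(X.columns)
    steps = []
    while predictors:
        fit = sm.Logit(y, sm.add_constant(X[predictors])).fit(disp=False)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        steps.append((list(predictors), worst, float(pvals[worst])))
        if pvals[worst] <= alpha:
            break
        predictors.remove(worst)
    return steps, predictors

# Example use (hypothetical names): steps, final_model = backward_stepwise(predictor_df, snow_labels)
```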

 

  18. Section 4.2, please, add some comment on why these events were selected.

Done. We added the following in Section 4.3 “To test the robustness of the algorithm, two significant snowfall events were selected with extensive coverage over different parts of US…”.

 

 

  19. Discussion, please, could you assess or indicate what threshold intensity this method is sensitive enough to detect?

See below; we did not attempt to quantify this aspect in this paper.

 

  20. Conclusions, lines 497-499, is this applicable to all ranges of snowfall intensities?

 

This was not possible to quantify within the scope of this investigation, but it will be the subject of a future investigation, which we now state in the Conclusions section.

 

 

  21. Conclusions, please, add what future steps are envisaged to produce a generally applicable methodology in the future, or what major needs still require efforts.

 

A sensitivity analysis of what type of snowfall is being detected and what is missed.

 

We have added the following: “Future work is needed to assess the performance of the algorithm in a wide variety of weather conditions and snowfall events before the algorithm can be deemed generally applicable and operationally acceptable“.

 

  22. References, I recommend checking the edition format thoroughly to make it uniform under the guidelines of the journal.

 

Done.

Final comment: the retrieval example is the best way to justify and consolidate the conclusions of this analysis; however, a quantitative assessment of this example would definitely bring more light onto the level of accuracy/performance of the methodology. I am aware that this might involve a significantly higher workload, but I leave it up to the Authors to consider the inclusion of some quantitative indicators of this in this manuscript.

As we have stated in the paper, a full-fledged quantitative assessment will be the subject of future work. This paper was mainly intended to test and demonstrate the usefulness and applicability of the polarization difference in snowfall detection algorithms.

 

Reviewer 3 Report

This paper studied the potential of using high frequency microwave measurements at vertical and horizontal polarizations for detecting snowfall and it is an interesting work. Here are a few questions and comments:

  1. page 2, line 64, was 'ate' in the sentence the right word?
  2. please clarify the statistical measures used in the paper, such as the P-value, etc., with equations or appropriate references
  3. table I, the SFR column is messy, please re-draw the table.
  4. page 3, section 2.3, please move the website details to footnote or reference section.

Author Response

This paper studied the potential of using high frequency microwave measurements at vertical and horizontal polarizations for detecting snowfall and it is an interesting work. Here are a few questions and comments:

  1. page 2, line 64, was 'ate' in the sentence the right word?

Done.

  2. please clarify the statistical measures used in the paper, such as the P-value, etc., with equations or appropriate references

We have clarified in the Methods section the statistical metric used for measuring performance (overall classification rate), the other metrics such as POD, FAR and HSS, and the significance testing. We also substantially expanded the captions of the Tables to explain the metrics and results, but did not venture to explain basic statistical concepts, e.g., the P-value, since these were deemed basic knowledge for expert readers.

  3. table I, the SFR column is messy, please re-draw the table.

Done!

  4. page 3, section 2.3, please move the website details to footnote or reference section.

Done!

 
