Peer-Review Record

A Comprehensive Clear-Sky Database for the Development of Land Surface Temperature Algorithms

by Sofia L. Ermida 1,2,* and Isabel F. Trigo 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 15 February 2022 / Revised: 25 March 2022 / Accepted: 6 May 2022 / Published: 11 May 2022
(This article belongs to the Special Issue Remote Sensing Monitoring of Land Surface Temperature (LST) II)

Round 1

Reviewer 1 Report

In this work, a database for training and verifying LST retrieval algorithms is presented. The authors make a valid argument why this database is needed and explain in detail how it improves previous ones. I have only some minor comments that I would like to ask you to clarify. I also recommend you do another proof-reading because there are a lot of typos.

Comments:

40-44: This sentence is quite long. I suggest you split it in two or more parts.

128: By regular you mean uniform? If not, please specify.

220: At each iteration, how many new profiles are evaluated?

230: What is d and how do you adjust it in each iteration?

286: There are several cases where the compared profiles exhibit completely different behavior in the lower troposphere (e.g., Figure 4b, 4c, 4e). How do you explain this? In what way can this impact the LST retrieval?

333: The grid size of these instruments is much finer than that of ERA5. For the collocation, did you average all the LST observations inside each ERA5 cell or used the nearest pixel?

341-343: Does this mean that the actual size of the database is less than 1000 +/- 10 samples per TCWV class?

348-349: Why is this needed? How do you decide the “size” of these five equally spaced values?

348-349: I cannot understand how this sentence relates to the next one (“As a result, …”).

Figure 6: The LST are from which instrument?

 

Typos:

44: a range possible -> a range of possible

72: address -> addressed

88: the are consistent -> that are consistent

94: condensate -> condensation

96: cloud a precipitation -> cloud precipitation

129: temperature a specific humidity -> temperature and specific humidity

139: The This calibration -> The calibration

186: where -> were

208: my -> may

362: there is a reference error.

429: the central wavelength for band 10 is not correct.

542: Carlos -> Carlo

Author Response

We would like to thank the reviewer for the thorough review and useful suggestions. We provide point-by-point answers below and have updated the text to clarify these questions. We also apologize for all the typos that slipped through our text review.

Comments:

40-44: This sentence is quite long. I suggest you split it in two or more parts.

Thank you for the suggestion. We have split the sentence into two parts.

128: By regular you mean uniform? If not, please specify.

We have replaced the term “regular” with “uniform”. We agree that it is clearer this way.

220: At each iteration, how many new profiles are evaluated?

Only one profile is selected at each iteration. We have changed the text to clarify this.

230: What is d and how do you adjust it in each iteration?

“d” is the minimum acceptable distance between two profiles in the calibration database. Because this distance also depends on the distribution of the humidity and temperature profiles, d must be updated for each TCWV/Tskin class in order to obtain the desired sample size. We first define an initial, reasonable value of d, and then repeat the selection process (steps 1-3) until the target sample size is achieved. At each iteration, d is increased/decreased if the final sample size is larger/smaller than the target. We have changed the text in section 2.3 to clarify this procedure.
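The iterative adjustment of d described in this response can be sketched as follows. This is a minimal illustration only: the Euclidean distance metric, the multiplicative step factors, and all function and variable names are our own assumptions, not taken from the paper.

```python
import numpy as np

def select_profiles(profiles, target_size, d_init=1.0, tol=10, max_iter=50):
    """Greedy selection of mutually dissimilar profiles (illustrative sketch).

    Repeats the selection with an adjusted minimum-distance threshold d
    until the sample size falls within `tol` of `target_size`.
    """
    d = d_init
    selected = [profiles[0]]
    for _ in range(max_iter):
        selected = [profiles[0]]
        for p in profiles[1:]:
            # keep p only if it is at least d away from every selected profile
            dists = [np.linalg.norm(p - s) for s in selected]
            if min(dists) >= d:
                selected.append(p)
        n = len(selected)
        if abs(n - target_size) <= tol:
            break
        # a larger d accepts fewer profiles, so increase d when oversampled
        d *= 1.1 if n > target_size else 0.9
    return selected, d
```

In this sketch, d grows when the selection is too large and shrinks when it is too small, mirroring the update rule described above; the real procedure operates per TCWV/Tskin class.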

286: There are several cases where the compared profiles exhibit completely different behavior in the lower troposphere (e.g., Figure 4b, 4c, 4e). How do you explain this? In what way can this impact the LST retrieval?

Indeed, the distribution of profiles is quite different in the lower troposphere. There are three main factors that we believe contribute to this difference: 1) the improved vertical resolution of ERA5 when compared to the data sources used by SeeBor (NOAA-88, TIGR-3, ERA-40); 2) the wider range of conditions provided by ERA5, which stems from a more realistic representation of atmospheric profiles (partially associated with the previous point) and surface variables when compared to SeeBor; and 3) the selection criterion that enforces a more uniform distribution of the profiles within the database. This discussion was included in section 3.3.

We expect that the improved representation of the high variability of possible atmospheric conditions in the calibration should significantly improve the quality of retrieval models. Most importantly, we expect them to have improved quality for more extreme conditions that were not well represented before. We have also included an assessment of the impact of the new database on a retrieval algorithm (section 4).

333: The grid size of these instruments is much finer than that of ERA5. For the collocation, did you average all the LST observations inside each ERA5 cell or used the nearest pixel?

As mentioned in section 2.2: “All datasets are projected onto the ERA5 0.25°×0.25° grid by simple average of all pixels within each grid-box.”
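The grid-box averaging quoted above can be sketched as follows. This is an illustrative implementation under our own assumptions (function and variable names are not from the paper; cloud screening and quality flags are omitted):

```python
import numpy as np

def regrid_to_coarse(lat, lon, lst, res=0.25):
    """Project fine-resolution LST pixels onto a regular lat/lon grid by
    simple averaging of all pixels falling within each grid box."""
    # integer grid-box indices for each pixel
    i = np.floor((lat + 90.0) / res).astype(int)
    j = np.floor((lon + 180.0) / res).astype(int)
    ny, nx = int(180 / res), int(360 / res)
    flat = i * nx + j
    # accumulate per-box sums and pixel counts, then divide
    sums = np.bincount(flat, weights=lst, minlength=ny * nx)
    counts = np.bincount(flat, minlength=ny * nx)
    with np.errstate(invalid="ignore"):
        mean = sums / counts  # NaN where a grid box contains no pixels
    return mean.reshape(ny, nx)
```

The `bincount`-based accumulation avoids an explicit loop over grid boxes, which matters when averaging full-disk imagery onto a 0.25° grid.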

341-343: Does this mean that the actual size of the database is less than 1000 +/- 10 samples per TCWV class?

The actual size of the database for each TCWV class is much higher than 1000. The target of 1000 +/- 10 samples is set for each TCWV/Tskin class (so each TCWV class contains multiple Tskin classes with ~1000 samples each). The total number of profiles selected for each class is shown in Figure 1. Then, multiple values of Tskin and emissivity are attributed to each profile. Ultimately, the number of profile/Tskin/emissivity sets in each TCWV/Tskin class will be approximately 1000x5x25.

348-349: Why is this needed? How do you decide the “size” of these five equally spaced values?

This is performed to increase the representativeness of the database in terms of surface conditions and, at the same time, avoid propagating errors in the ERA5 Tskin to our database. Figure 6 clearly shows a systematic overestimation of the LST for low temperature values. Several authors have also pointed out a tendency of the ERA5 dataset to overestimate nighttime Tskin values and to underestimate the Tskin at the peak of the day. By incorporating information from satellite LST, we mitigate the impact of such limitations on the calibration procedure. This is discussed in section 3.4.1, lines 333-339.

To define the range of LST, we take 5 equally spaced values between the 5th and 95th percentiles. The spacing between these values will depend on the percentile values. The number of values considered is not critical; the most important part is to increase the effective range of LSTs represented in the database.

348-349: I cannot understand how this sentence relates to the next one (“As a result, …”).

From the distribution of the LSTs we take 5 values (equally spaced between the 5th and 95th percentiles). Consequently, 6 Tskin values are attributed to each profile in the database (5 from the LSTs and 1 from ERA5).
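The assignment of Tskin candidates described here can be sketched in a few lines. This is a minimal illustration under our own assumptions (function and argument names are hypothetical, not from the paper):

```python
import numpy as np

def tskin_candidates(lst_samples, era5_tskin, n_values=5):
    """Attribute multiple skin-temperature values to one profile:
    n_values equally spaced between the 5th and 95th percentiles of the
    satellite LST distribution, plus the ERA5 Tskin itself."""
    p5, p95 = np.percentile(lst_samples, [5, 95])
    values = np.linspace(p5, p95, n_values)
    # 5 satellite-derived values + 1 ERA5 value = 6 per profile
    return np.append(values, era5_tskin)
```

Using the 5th-95th percentile range rather than the minimum/maximum keeps outlier LST retrievals from stretching the sampled range, consistent with the intent described above.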

Figure 6: The LST are from which instrument?

These are distributions of LST-Tskin for all the instruments described in the data section (section 2.2), namely SEVIRI, GOES, Himawari and AVHRR.

Reviewer 2 Report

The manuscript 'A comprehensive clear-sky database for the development of Land Surface Temperature algorithms' submitted by S.L. Ermida and I.F. Trigo presents a newly generated database that contains globally representative atmospheric temperature and humidity profiles and land surface conditions. Based on a dissimilarity criterion, the profiles were selected from the ERA5 data set and combined with realistic ranges of surface conditions obtained from satellite data. The main motivation behind the authors' efforts has been to exploit the considerably improved horizontal, vertical and temporal resolutions of the ERA5 dataset to obtain a consistent, globally representative calibration database for LST retrieval. The obtained database includes the selected atmospheric profiles, the corresponding, vertically integrated parameters and realistic ranges of LST and emissivity associated with each profile. The new database reduces the biases associated with some of the existing databases (e.g. Seebor) and, through the use of satellite data for specifying the surface parameter ranges, provides more realistic LST ranges under certain surface conditions, e.g. savanna-like landscapes and snow / ice. The used materials and methods are well described and the conclusions drawn by the authors are plausible. The developed globally representative calibration database has been made freely available to the scientific community and provides researchers worldwide with a state-of-the-art, globally representative training data set for developing and improving LST retrieval algorithms. The manuscript is well written and structured and, after some minor revisions, is recommended for publication.

 

Minor suggestions and corrections:

line 8: ... linked to a ...

line 16: ... temperature and ...

line 27: Surface processes ... linked to land surface ...

line 28: ... it is a fundamental parameter of the surface ...

line 35: ... [18-22], and ...

line 40: ... observations that ...

line 43-44: ... of a possible range of atmospheric and surface conditions. The sampling ...

line 63: ... models fitted ...

line 71-72: ... been addressed ...

line 79: please check if it should say '1998' or 1988'

line 88: ... variables that are ...

line 96: ... cloud and precipitation ...

line 129: ... temperature and ...

line 139: The calibration ...

line 208: ... atmospheres may ...

line 211: ... et al. use ...

line 212: 'subset' or 'subsample'?

line 213: 'recognition' or 'criterion' (or nothing)?

 

equations 1-3 should be larger

 

Figure 2: these plots are rather small. Consider showing only half of them as larger plots and moving the rest into an appendix.

 

line 265: ... much larger area ...

 

Figure 3: does the y-axis show relative frequency or is there a factor missing?

 

line 307: ... likely to provide ... representation of ...

 

Figure 4: these plots are rather small (legends and labels are difficult to read). Consider showing only half them as larger plots and moving the rest into an appendix.

line 311: Distribution of ... (since 'profiles' already implies vertical)

line 314: The distribution of the Seebor profiles ...

 

lines 315-316: ... Tskin frequently still have ...

 

Figure 5: please use the same range for the y-axes of a) and b)

Caption figures 5, 6, 9, 10, 11: the 'dashed lines' in box plots are also referred to as 'whiskers'

 

line 348: ... (bottom panel, in orange and red): are you sure about the colors? However, I find the colors in the plots difficult to see.

 

line 362: there is broken reference, presumably to Figure 7. However, I also find it hard to see a difference between Figure 7 and Figure 5.

Figure 7: please make this the same size as figure 5.

 

line 397: ... seen in spectral ...

line 429: ... 10 (centered at 12.0 micron)

line 431-432: ... performed for the corresponding simulated BTs largely applies ...

line 435: ... between them.

line 440-441: ... increases compared to ...

line 508: criterion?

line 518: Data from the ERA5 are ...

line 520: ... are used to specify a more realistic and larger ...?

line 522-523: ... the impact of possible ... ERA5 dataset on algorithm calibration. ... [8] identified ...

line 530: ... to realistically describe ...?

line 533-534: ..., the wider range ... values is relevant when considering the between ... temperature.

line 541-542: We also briefly discuss some re-sampling ... exercises with this new dataset. Monte Carlo or ...

line 545: ... any resampling ...

line 553: ... to include water surfaces and ...

line 556: ... from a wide and realistic ...

 

Author Response

We would like to thank the reviewer for the positive comments and all the suggestions. We have incorporated all the suggested corrections. We have changed the colors of Figures 3, 6 and 8 to make them easier to read. Regarding Figures 2 and 4, we have opted to keep all panels. Please note that although they might be a bit difficult to read in a Word file, readers will have access to high-resolution figures on the website, which allows zooming in on the figures. We also noticed that there was a mistake in generating Figure 5: it was indeed a repetition of Figure 7, but with a changed legend.

Reviewer 3 Report

In this study, a comprehensive clear-sky database is proposed for LST retrieval algorithms, and the results are analyzed from five aspects. A variety of model data and satellite data are used in the study, but the description of the specific selection method is too brief. Therefore, the current form of the manuscript is not satisfactory for publication in Remote Sensing and needs to be sent back for minor revision. I have the following comments, which should be addressed by the authors to improve the quality of the manuscript.

  • Section 2.3 should be described in more detail.
  • This study should add the LST retrieval validation results using this database, and compare with the validation results using SeeBor database.
  • Line362: Reference error.
  • Line429: The central wavelength description is incorrect.

Author Response

We would like to thank the reviewer for the positive comments. We have taken the suggestions into account and more details have been included in section 2.3. We have also included an analysis of the impact of the new database on an LST retrieval algorithm (section 4).

Reviewer 4 Report

This study provides a comprehensive atmospheric database for developing or calibrating LST retrieval algorithms that rely on atmospheric profiles. As reviewed in this manuscript, many atmospheric profile databases, such as TIGR, SeeBor, NOAA-88, etc., have been developed and widely employed for developing LST retrieval algorithms. Demonstrating the necessity of developing such a database is the key point of this study, since a number of operational LST retrieval algorithms have been developed using the traditional atmospheric databases and achieve acceptable LST retrieval accuracy. Such a comparison study is needed to illustrate the efficacy of this new atmospheric database; otherwise, it may confuse LST algorithm developers when they must choose an optimal atmospheric database.

Author Response

We would like to thank the reviewer for providing this feedback. We hope we can make the added value of our database clearer. Using profiles from the most recent ERA5 reanalysis already represents a significant advancement, since ERA5 provides better temporal, spatial and vertical resolution and a state-of-the-art modeling system. Our analysis shows that the new database provides a much wider range of atmospheric profiles, and a significantly improved representation of conditions in the troposphere, which is critical since the TIR signal mostly originates from this region (Figure 4). We further show that the representation of surface conditions, namely through temperature and emissivity, is also significantly improved (Figures 5 and 10; please note that there was a mistake in generating Figure 5 in the original manuscript). All of these lead to a much wider range of brightness temperature values being included in the algorithm calibration (Figure 11), which results in models that are capable of capturing much more realistic atmospheric and surface conditions. Nevertheless, we have further included in the paper an analysis of the impact of the new database on the calibration of a generalized split-window algorithm. Results show that the performance of the model calibrated with the proposed database is generally improved, and the model fit is much more robust; the SeeBor database reveals a high sensitivity to the selected subset of profiles used for training. A discussion of this topic was added to section 4.
