Article
Peer-Review Record

Cloud Observation and Cloud Cover Calculation at Nighttime Using the Automatic Cloud Observation System (ACOS) Package

by Bu-Yo Kim * and Joo Wan Cha
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 27 June 2020 / Revised: 16 July 2020 / Accepted: 17 July 2020 / Published: 18 July 2020
(This article belongs to the Section Atmospheric Remote Sensing)

Round 1

Reviewer 1 Report

The authors have responded to most of my previous comments carefully. However, there is still a concern about the objectivity and uncertainty estimates of this retrieval method. The authors aim to design an objective cloud cover retrieval method; however, the key threshold values (Fig. 3) are mostly determined subjectively. The authors claim that the retrieved cloud cover does not vary much as long as the changes to these values do not exceed 10%. It would be more reassuring if such claims could be backed by analysis that can be clearly demonstrated to the readers. For example, an easy fix is to subdivide the dataset randomly into two parts, use one as a training dataset to determine the critical threshold values, and use the other to evaluate the performance of this set of threshold values.
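The random split the reviewer suggests could be sketched as follows (a hypothetical helper, not part of the manuscript; `cases` stands for any list of labeled sky images):

```python
import random

def split_dataset(cases, train_frac=0.5, seed=42):
    """Randomly split cases into a training set (for tuning thresholds)
    and a held-out set (for evaluating the tuned thresholds)."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

Thresholds would then be fitted on the first half only and scored on the second.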

I understand the argument that the results show the ACOS retrievals are somewhat comparable to human observers and that it is useful to use this method for automated cloud cover observations elsewhere. However, the lack of evaluation against an independent objective dataset means the quality of this retrieval is still unknown, which greatly compromises its value for both weather and climate applications. Some sanity checks against satellite observations could be useful. For example, the gridded monthly cloud product from either MODIS or CloudSat/CALIPSO is very easy to use. Although these products do not allow direct comparison with individual images, they can tell whether there are substantial systematic biases in the ACOS and DROM cloud retrievals.

Again, I still consider this a very interesting study that has the potential to provide reliable automated nighttime cloud observations. However, addressing these concerns is important to make this manuscript ready for publication.

 

Minor edits:

Line 52: “i.g.,” to “e.g.,”.

Author Response

Thank you again for taking the time to review this manuscript. Following the reviewer's comments, we have added content to the revised manuscript as detailed below. We believe that the quality of the manuscript has been improved and clarified through the reviewer's comments.

 

The authors have responded to most of my previous comments carefully. However, there is still a concern about the objectivity and uncertainty estimates of this retrieval method. The authors aim to design an objective cloud cover retrieval method; however, the key threshold values (Fig. 3) are mostly determined subjectively. The authors claim that the retrieved cloud cover does not vary much as long as the changes to these values do not exceed 10%. It would be more reassuring if such claims could be backed by analysis that can be clearly demonstrated to the readers. For example, an easy fix is to subdivide the dataset randomly into two parts, use one as a training dataset to determine the critical threshold values, and use the other to evaluate the performance of this set of threshold values.

We performed sensitivity tests by changing the image classification conditions and RBR thresholds. We have added the following sentence to the revised manuscript:

"The following results were obtained when the image classification conditions (> and ) and the RBR thresholds (T = 0.75, 1.00, ) determined in this study were varied within ±10% at 1% intervals. The image classification results changed in only approximately 9.85% of all cases, while the remaining cases showed the same results. In this case, the bias between DROM and ACOS was -0.53 tenth, the RMSE was 2.11 tenths, and the correlation coefficient was 0.87; these did not differ significantly from the results shown in Section 4. That is, the numerical values determined in this study are sufficient as image classification conditions and thresholds to classify images and calculate cloud cover."
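A minimal sketch of how such a ±10% sweep at 1% intervals could be generated (our illustration; the authors' actual perturbation code is not shown):

```python
def sweep_threshold(base, rel_range=0.10, step=0.01):
    """Return perturbed threshold values within ±rel_range of `base`,
    at `step` (relative) intervals, as in the described sensitivity test."""
    n = round(rel_range / step)
    return [base * (1 + k * step) for k in range(-n, n + 1)]
```

For instance, `sweep_threshold(0.75)` yields 21 candidate thresholds from 0.675 to 0.825; the cloud cover retrieval would be re-run for each and the resulting bias, RMSE, and correlation compared.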

 

I understand the argument that the results show that the ACOS retrievals are somewhat comparable to human observers and it is useful to use this method for automated cloud cover observations elsewhere. However, the lack of evaluation with an independent objective dataset means the quality of this retrieval is still unknown, which greatly compromises its value for both weather and climate applications. Some sanity check with satellite observations can be useful. For example, the gridded monthly cloud product from either MODIS or CloudSat/Calipso can be very easy to use. Although they do not provide the direct comparison to individual images, they can tell if there are substantial system biases in ACOS and DROM cloud retrievals.

We compared the DROM human observation data against the monthly average cloud data of Terra/Aqua MODIS. The results were added to the manuscript as follows:

"For reference, when comparing the day- and nighttime monthly average data from MODIS on the Terra/Aqua satellites [44] and DROM (www.weather.go.kr), the average cloud covers of MODIS and DROM were 6.61 and 5.18 tenths, a difference of 1.42 tenths. The RMSE was 1.50 tenths, and the correlation coefficient between the two cloud covers was 0.88. For daytime cloud cover calculated using a ground-based imager, previous studies [6,18] reported that the bias between imager-calculated and human-observed cloud cover did not exceed 0.5 tenth."
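The bias, RMSE, and correlation statistics quoted here and elsewhere in the responses follow the standard definitions; a generic sketch (not the authors' code):

```python
import math

def compare(obs, ref):
    """Bias, RMSE, and Pearson correlation between two cloud-cover series
    (e.g., ACOS or DROM vs. a reference such as MODIS monthly means)."""
    n = len(obs)
    bias = sum(o - r for o, r in zip(obs, ref)) / n
    rmse = math.sqrt(sum((o - r) ** 2 for o, r in zip(obs, ref)) / n)
    mo, mr = sum(obs) / n, sum(ref) / n
    cov = sum((o - mo) * (r - mr) for o, r in zip(obs, ref))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_r = sum((r - mr) ** 2 for r in ref)
    corr = cov / math.sqrt(var_o * var_r)
    return bias, rmse, corr
```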

 

Again, I still consider this a very interesting study and have the potential to provide reliable automated nighttime cloud observations. However, addressing these concerns are important to make this manuscript ready for publication.

Minor edits:

Line 52: “i.g.,” to “e.g.,”.

We have revised “i.g.,” to “e.g.,”.

Reviewer 2 Report

Review of “Cloud observation and cloud cover calculation at nighttime using the Automatic Cloud Observation System (ACOS) package” by Bu-Yo Kim and Joo Wan Cha


In the manuscript, an approach for nighttime total cloud cover (TCC) retrieval is presented. The main focus of the presented study is the automatic optical package named "ACOS" (Automatic Cloud Observation System), capable of capturing cloud imagery at nighttime thanks to the Canon EOS 6D camera's sensitivity of up to ISO 25600 and long exposure (5 s).


The manuscript has an easy-to-read structure, and it is written in a clear manner. The presented study looks consistent. The English of the text is easy to understand.


The authors should pay more attention to their introduction when they overview the progress made by other groups in the field of automated cloud observations. More precisely, in lines L38-41, they mention that "cloud cover data (octa or tenth) has only been recorded through the eyes of a human observer," which does not reflect the actual state of things. On the contrary, several publications mention existing devices and algorithms for cloud cover retrieval [1–4], including the ones developed in the US in collaboration within the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program and resulting in a commercial product named TSI [5,6]. The phrase in L38-41 also seems inconsistent with the overview provided further in L54-56, which demonstrates that the authors are well aware of the existing studies involving cloud cameras.


In Section 3.1, the authors describe the preprocessing of the correction of distortions introduced by the fish-eye lens of the camera. There are three kinds of fish-eye lenses: stereographic, equidistant (equiangular), and orthographic. It is worth mentioning the type of the lens since a researcher may want to reproduce the results of the study using the same formulas (1-3). In case he/she has a fish-eye lens of a different type, the correction of the distortion with these formulas may cause biased results.


In L176-178, the authors still argue for the necessity of the correction of distortions from common sense, without any proof or hypothesis testing. I would, again, disagree at this point. Perhaps it is worth mentioning that this statement is not proven statistically in any way and is supported only by common sense, which is not a scientific way of reasoning.


In L195, the authors propose the value of the R correction the following way: "a value of 25 was chosen to be ideal". There is no explanation of how they came to this value, or in what sense that value is "ideal." In case somebody would like to reproduce the study, there is no clue as to which criterion to use while estimating the "ideal" value of this correction.


The primary issue of the presented study in its present form is the lack of an explanation of why the optical camera captures images at nighttime at all. In L67-68, the authors state that there is no light source at night. In this case, no camera would be able to capture clouds or clear sky, no matter how high its ISO or how long its shutter speed.

 

Therefore, there must be some kind of light source. I would suggest that the authors elaborate on this point. Otherwise, it is unclear why the ACOS works at all.


In case there IS an artificial light source, it is unclear how the authors distinguish between the artificial light source (needed for the ACOS to work) and the light pollution mentioned, for example, in L201-202.


In the caption of Figure 5, it is unclear what N stands for.

Author Response

Review of “Cloud observation and cloud cover calculation at nighttime using the Automatic Cloud Observation System (ACOS) package” by Bu-Yo Kim and Joo Wan Cha

In the manuscript, an approach for nighttime total cloud cover (TCC) retrieval is presented. The main focus of the presented study is the automatic optical package named "ACOS" (Automatic Cloud Observation System), capable of capturing cloud imagery at nighttime thanks to the Canon EOS 6D camera's sensitivity of up to ISO 25600 and long exposure (5 s).

The manuscript has an easy-to-read structure, and it is written in a clear manner. The presented study looks consistent. The English of the text is easy to understand.

Thank you again for taking the time to review this manuscript. Following the reviewer's comments, we have added and revised content in the revised manuscript as detailed below. We believe that the quality of the manuscript has been improved and clarified through the reviewer's comments.

 

The authors should pay more attention to their introduction when they overview the progress made by other groups in the field of automated cloud observations. More precisely, in lines L38-41, they mention that "cloud cover data (octa or tenth) has only been recorded through the eyes of a human observer," which does not reflect the actual state of things. On the contrary, several publications mention existing devices and algorithms for cloud cover retrieval [1–4], including the ones developed in the US in collaboration within the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program and resulting in a commercial product named TSI [5,6]. The phrase in L38-41 also seems inconsistent with the overview provided further in L54-56, which demonstrates that the authors are well aware of the existing studies involving cloud cameras.

We have revised the sentence as follows:

“Even though cloud cover represents the main observation data for global weather and climate, it is one of the meteorological variables for which ground-based automatic observations have not yet been performed in several countries, including South Korea. In other words, cloud cover data (octa or tenth) has only been recorded through the eyes of a human observer; thus, it is based on the subjective judgment of the observer [8,9].”

 

In Section 3.1, the authors describe the preprocessing of the correction of distortions introduced by the fish-eye lens of the camera. There are three kinds of fish-eye lenses: stereographic, equidistant (equiangular), and orthographic. It is worth mentioning the type of the lens since a researcher may want to reproduce the results of the study using the same formulas (1-3). In case he/she has a fish-eye lens of a different type, the correction of the distortion with these formulas may cause biased results.

The fish-eye lens used in this study exhibits barrel (negative) distortion. Therefore, any of the three distortion correction approaches mentioned by the reviewer could be applied. However, the results may differ depending on the correction method; in particular, the difference is large in the edge region of the image. Typically, for a 180° fish-eye lens, correction is performed using the orthographic distortion equation (Section 3.1). We have also added the model name of the fish-eye lens (EF 8-15mm f/4L Fisheye USM) to the manuscript for the readers.
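For reference, the three projection models the reviewer lists map a ray's zenith angle θ to a different radial distance on the image plane; a small sketch of the standard formulas (illustrative only, with `f` the focal length):

```python
import math

def fisheye_radius(theta, f, model="orthographic"):
    """Radial image distance for a ray at zenith angle `theta` (radians)
    under common fisheye projection models."""
    if model == "orthographic":
        return f * math.sin(theta)          # r = f*sin(theta)
    if model == "equidistant":
        return f * theta                    # r = f*theta
    if model == "stereographic":
        return 2 * f * math.tan(theta / 2)  # r = 2f*tan(theta/2)
    raise ValueError(f"unknown model: {model}")
```

Applying a correction derived for one model to a lens built around another would distort the apparent cloud area, most strongly near the horizon, which is why the lens type matters for reproducibility.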

 

In L176-178, the authors still argue for the necessity of the correction of distortions from common sense, without any proof or hypothesis testing. I would, again, disagree at this point. Perhaps it is worth mentioning that this statement is not proven statistically in any way and is supported only by common sense, which is not a scientific way of reasoning.

We have revised this sentence as follows:

“Therefore, when images captured using a fish-eye lens are used, it is necessary to calculate the cloud cover after producing images similar to those the actual observer sees by performing distortion correction [18,19].” to “Therefore, when images captured using a fish-eye lens are used, it is necessary to correct the relative size of clouds by performing distortion correction [18,19].”

 

In L195, the authors propose the value of the R correction the following way: "a value of 25 was chosen to be ideal". There is no explanation of how they came to this value, or in what sense that value is "ideal." In case somebody would like to reproduce the study, there is no clue as to which criterion to use while estimating the "ideal" value of this correction.

We have added the following sentence:

“The adjusted R brightness 25 is the average brightness within 10° of the center of the image of the clearest sky (the average RGB brightness in the image was the smallest; 0300 UTC on January 3, 2019).”

 

The primary issue of the presented study in its present form is the lack of an explanation of why the optical camera captures images at nighttime at all. In L67-68, the authors state that there is no light source at night. In this case, no camera would be able to capture clouds or clear sky, no matter how high its ISO or how long its shutter speed.

Therefore, there must be some kind of light source. I would suggest that the authors elaborate on this point. Otherwise, it is unclear why the ACOS works at all.

We have revised and added this sentence as follows:

"At night, on the other hand, it is difficult to distinguish clouds from the sky because there is no light source such as the sun. When surrounding light sources are completely blocked, it is difficult to detect nighttime clouds with visible channel information (i.e., RGB brightness). In other words, clouds are detected through light scattered from light sources (such as street lamps around the fields or the lights from buildings)."

 

In case there IS an artificial light source, it is unclear how the authors distinguish between the artificial light source (needed for the ACOS to work) and the light pollution mentioned, for example, in L201-202.

We have revised the sentence as follows:

"In the edge region of the image, the luminance is large even in the cloud-free region. It can be increased by atmospheric turbidity (aerosol, haze, etc.) as well as by light pollution (street lamps around the fields or lights from buildings) around the observation equipment [8,22,40,43]."

The street lamps, exposed in the outdoor environment, heavily pollute the ACOS images. Therefore, all of these contaminated areas were removed from the images by the algorithm using the standard deviation (std) 10 condition (Section 3.1).

 

In the caption of Figure 5, it is unclear what does N stand for.

We have defined N in the caption of Figure 5.

Reviewer 3 Report

This paper presents an Automatic Cloud Observation System and nighttime cloud cover calculation algorithm based on a digital camera. I recommend this paper to be published after minor revision.

1. Line 133: RBR should be defined for the first time it is used in the main text.

2. Section 3.2: It is quite difficult to follow the methodology described in this section. This section should be rephrased to describe the nighttime algorithm more clearly and logically.

3. Figure 3: What are Ye and Yc? How to calculate them?

4. Lines 213-215: How did the authors determine those thresholds?

5. Lines 262-264: 'but there were large differences between ACOS and human observations for cases with low cloud cover. This is because large calculation errors occurred in the sunrise and sunset images'. The sunrise and sunset images can be screened out by, for example, using a threshold on solar zenith angle. What do the comparison results look like if those sunrise and sunset images are removed?

6. Figure 7 shows that ACOS detects no clouds due to the influence of sunrise/sunset when DROM reports a large amount of clouds. Figure 5a also shows that sometimes DROM reports no clouds but ACOS detects a substantial amount of clouds. What is the reason for this?

Author Response

This paper presents an Automatic Cloud Observation System and nighttime cloud cover calculation algorithm based on a digital camera. I recommend this paper to be published after minor revision.

Thank you again for taking the time to review this manuscript. Following the reviewer's comments, we have added and revised content in the revised manuscript as detailed below. We believe that the quality of the manuscript has been improved and clarified through the reviewer's comments.

 

1. Line 133: RBR should be defined for the first time it is used in the main text.

We have defined the RBR on line 133.

 

2. Section 3.2: It is quite difficult to follow the methodology described in this section. This section should be rephrased to describe the nighttime algorithm more clearly and logically.

3. Figure 3: What are Ye and Yc? How to calculate them?

We have added the following sentence and revised the caption of Figure 3 as follows:

“The adjusted R brightness 25 is the average brightness within 10° of the center of the image of the clearest sky (the average RGB brightness in the image was the smallest; 0300 UTC on January 3, 2019).”

"The average RGB brightness and RBR are  and , respectively. The average luminance of the edge (SZA 60-80°) and center (SZA 0-20°) regions is Ye and Yc, respectively.  is the average RBR of the center region. T is the RBR threshold."

 

4. Lines 213-215: How did the authors determine those thresholds?

As shown in Figure 4, the RBR and luminance characteristics were analyzed using a few images, and strict image classification conditions and RBR thresholds were then applied to all cases to calculate cloud cover.

We performed sensitivity tests by changing the image classification conditions and RBR thresholds.

We have added the following sentence:

"The following results were obtained when the image classification conditions (> and ) and the RBR thresholds (T = 0.75, 1.00, ) determined in this study were varied within ±10% at 1% intervals. The image classification results changed in only approximately 9.85% of all cases, while the remaining cases showed the same results. In this case, the bias between DROM and ACOS was -0.53 tenth, the RMSE was 2.11 tenths, and the correlation coefficient was 0.87; these did not differ significantly from the results shown in Section 4. That is, the numerical values determined in this study are sufficient as image classification conditions and thresholds to classify images and calculate cloud cover."

 

5. Lines 262-264: 'but there were large differences between ACOS and human observations for cases with low cloud cover. This is because large calculation errors occurred in the sunrise and sunset images'. The sunrise and sunset images can be screened out by, for example, using a threshold on solar zenith angle. What do the comparison results look like if those sunrise and sunset images are removed?

We have added the following sentence:

"When excluding the 139 cases affected by these issues, the average cloud cover of ACOS and DROM was 4.66 and 4.84 tenths, respectively, exhibiting a bias of -0.17 tenth, an RMSE of 1.52 tenths, and a correlation coefficient of 0.94."

 

6. Figure 7 shows that ACOS detects no clouds due to the influence of sunrise/sunset when DROM reports a large amount of clouds. Figure 5a also shows that sometimes DROM reports no clouds but ACOS detects a substantial amount of clouds. What is the reason for this?

If we understand the question about Figure 7 correctly: in both cases, DROM observed clouds, but ACOS calculated the cloud cover as 0 tenths.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Review of Kim and Cha

The authors constructed an automatic cloud observation system (ACOS) and developed an algorithm to objectively retrieve nighttime cloud cover. The ACOS retrievals are compared to cloud cover observations from human observers at the same site. In general, the two datasets agree well with each other, with some discrepancies under some sky conditions. The nighttime cloud cover retrieval by such sky imagers is important for continuous autonomous cloud observations at ground sites, especially when no additional instrument is required. However, my main concern is that the basis for the choice of boundary values is not clearly explained. It is not obvious whether this choice is objective. In addition, the sensitivity of the retrievals to this choice is not discussed, which is necessary to show the robustness of this algorithm. I recommend a major revision to this manuscript before it is accepted for publication.

Major concerns
1. The writing of the manuscript, especially the first two paragraphs of the introduction section, needs substantial improvement. I understand the authors are not native speakers of English, so getting help from native speakers can greatly improve the clarity of presentation of this manuscript.

2. The basis for the choices of boundary values of 0.75, 1 and the scheme of mean RBR×0.8 are not clearly explained. The sensitivities of the resulting cloud cover retrievals to the choice of these boundary values are not shown either. In order to show that the nighttime sky imager cloud cover is objective, such choices of boundary values and schemes have to be established using a trustworthy and independent cloud cover dataset. If the cloud cover from human observers is used, then the training dataset and the dataset used for evaluation should be considered separately. If such practice has been carried out already, please describe it more clearly in the manuscript. Currently, the retrieval method is very interesting and promising, but it needs to be constructed more clearly and carefully to be meaningful and to convince the readers that this algorithm is actually objective.

Suggestion:

Have you examined the spatial scale of the cloud cover variability around the site? If the cloud cover variability is mostly homogeneous around the site, say in a radius of tens of kilometers, why not use satellite cloud cover as another independent objective cloud cover dataset to evaluate the sky imager retrievals? I think this might be something to consider for your future studies.

Minor comments:

Line 28: “radiant” to “radiative”

Line 30-32: This sentence is too complicated and its meaning is not clearly expressed. Please rephrase.

Line 32-34: I may guess what this sentence is trying to say, but it is very confusing. Do you mean that clouds reduce the solar radiation that reaches the surface?

Line 36: “production of cloud information” to “observations of clouds”?

Line 38-39: This is not true. As the authors mentioned later in line 52-59, many ground-based remote sensing instruments provide automated cloud cover observations. A lot of them have been part of the routine observations at these sites for more than two decades.

Line 51-52: I assume the authors are referring to the passive sensors, such as the ones on MODIS. But the spatial resolution depends on the specific instruments. Some active sensors are on polar orbiters as well, and they have a much higher spatial resolution.

Line 70,72: “region” to “wavelength”?

Fig.2: The cloud cover tenth values in the subplot titles do not make sense and are not consistent with the caption.

Line 160-162: What is the projection model of this fish-eye lens? Equidistant? Have you done any calibration of the lens, such as with a chessboard-like process?

Line 203: How does “southern direction” translate in the x-y grid of Fig. 5?

Line 211 to 215: How do you determine the boundary values of 1 and 0.75? If they vary image by image, are they determined objectively? Is the RBR×0.8 in Fig. 4 the way to determine the boundary values? How did you decide to adopt this threshold? How sensitive are your results to this choice? Right now this is a linear fit through the origin. What if the linear fit does not go through the origin, for example using RBR×0.75+0.003? What happens if a nonlinear fit is used?

Fig. 6 to 7: Because of the nature of cloud cover, the frequency of occurrence of clear sky and overcast is much larger than in other bins. The variability in the other bins is not obvious with the current colorbar. Maybe also compare the frequency of occurrence/histograms in each cloud cover bin of the two observations directly?

 

 

Author Response

The authors constructed an automatic cloud observation system (ACOS) and developed an algorithm to objectively retrieve nighttime cloud cover. The ACOS retrievals are compared to cloud cover observations from human observers at the same site. In general, the two datasets agree well with each other, with some discrepancies under some sky conditions. The nighttime cloud cover retrieval by such sky imagers is important for continuous autonomous cloud observations at ground sites, especially when no additional instrument is required. However, my main concern is that the basis for the choice of boundary values is not clearly explained. It is not obvious whether this choice is objective. In addition, the sensitivity of the retrievals to this choice is not discussed, which is necessary to show the robustness of this algorithm. I recommend a major revision to this manuscript before it is accepted for publication.

We appreciate the valuable opinions and comments of the reviewer, which allowed us to revise our manuscript and greatly enhance its quality. We have made the following revisions in accordance with the reviewer's comments.

 

Major concerns

  1. The writing of the manuscript, especially the first two paragraphs of the introduction section, needs substantial improvement. I understand the authors are not native speakers of English, so getting help from native speakers can greatly improve the clarity of presentation of this manuscript.

We have revised this passage as follows:

“Clouds, which cover approximately 70% of the area of the earth, continually change the balance of radiative energy, resulting in meteorological changes that affect the hydrologic cycle, local weather, and climate. Their interaction with aerosols results in global cooling as well as the greenhouse effect owing to changes in their physical characteristics. Such interactions affect precipitation efficiency, which in turn affects vegetation and water resource security [1–3]. In addition, cloud cover and cloud type are meteorological elements that increase uncertainty in predicting climate and meteorological phenomena because they perform the dual actions of reflecting solar radiation and absorbing the earth radiation reflected from the ground surface [4,5]. Therefore, observations of clouds, including high-frequency, high-accuracy cloud cover and cloud type, are required as input data for radiation, weather forecast, and climate models [6,7].

Even though cloud cover represents the main observation data for global weather and climate, it is one of the meteorological variables for which ground-based automatic observation has not yet been performed; i.e., cloud cover data (octa or tenth) has only been recorded through the eyes of a human observer and is thus based on the subjective judgment of the observer [8,9]. Therefore, cloud cover data is often characterized by considerable variability, depending on the observer. In particular, observation data recorded by an unskilled observer will contain errors, which degrade the quality of the observation data [2,6]. In other words, the observation data lack objectivity. During the day, human cloud observation is performed at 1-hour intervals; at night, however, it is only performed at 1–3 hour intervals, depending on the work cycle of the observer and the weather conditions. Therefore, to complement the lack of objectivity and improve the observation period, which are the major shortcomings of human observation, meteorological satellites and surface remote observation data can be used [4,10]. In the case of geostationary satellites, it is difficult to identify the horizontal distribution or shape of clouds because their spatial resolution is coarse (2 km × 2 km), even though their observation period is short (2–10 minutes). Polar-orbiting satellites offer a relatively high spatial resolution of 250 m × 250 m (e.g., MODIS), but they can only observe the same point twice a day [3,11,12]. Ground-based remote sensing equipment includes ceilometers, lidar, and camera-based imagers (e.g., Skyviewer (PREDE Inc. [6,13]), Whole Sky Imager (WSI) (Marine Physical Laboratory [14,15]), and Total Sky Imager (TSI) (Yankee Environmental System (YES) Inc. [16])). Among these, ceilometers and lidar, which emit a laser directly at an object and detect the return signal, cannot determine cloud information for the entire sky because they can only sample a part of it [10]. Camera-based devices, on the other hand, have a number of useful benefits for detecting clouds, such as the observation area (coverage), image resolution, and observation period [6,17,18].”

 

  2. The basis for the choices of boundary values of 0.75, 1 and the scheme of mean RBR×0.8 are not clearly explained. The sensitivities of the resulting cloud cover retrievals to the choice of these boundary values are not shown either. In order to show that the nighttime sky imager cloud cover is objective, such choices of boundary values and schemes have to be established using a trustworthy and independent cloud cover dataset. If the cloud cover from human observers is used, then the training dataset and the dataset used for evaluation should be considered separately. If such practice has been carried out already, please describe it more clearly in the manuscript. Currently, the retrieval method is very interesting and promising, but it needs to be constructed more clearly and carefully to be meaningful and to convince the readers that this algorithm is actually objective.

We did not use independent test data sets to create the threshold conditions. As shown in Figure 5, RBR and luminance characteristics were analyzed using a few images, and strict RBR thresholds were applied to all cases to calculate cloud cover. The results are the same as written in the manuscript. In other words, for a cloud-free image, cloud pixel detection was minimized by setting the RBR threshold to the high value of 1. For a cloudy image, cloud pixel detection was maximized by setting the RBR threshold to 0.75 or to mean RBR × 0.8.
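The class-dependent thresholding described above (an RBR threshold of 1 for cloud-free images, and 0.75 or mean RBR × 0.8 for cloudy images) can be sketched as follows. The function names are hypothetical, and taking the smaller of the fixed 0.75 and the scaled image-mean RBR for cloudy scenes is our assumption, since the response does not state which of the two applies when:

```python
import numpy as np

def cloud_mask_from_rbr(rbr, image_class):
    """Flag cloud pixels where the red-to-blue ratio (RBR) exceeds a
    class-dependent threshold (hypothetical helper, names are ours).

    rbr         : 2-D array of per-pixel red-to-blue ratios
    image_class : "cloud-free" or "cloudy", from the prior image classification
    """
    if image_class == "cloud-free":
        # strict threshold: minimize false cloud detections on clear skies
        threshold = 1.0
    else:
        # looser threshold for cloudy scenes: 0.75 or 80% of the image-mean
        # RBR; taking the smaller of the two is our assumption
        threshold = min(0.75, 0.8 * float(np.nanmean(rbr)))
    return rbr > threshold

def cloud_cover_tenths(mask):
    """Cloud cover in tenths: cloudy pixels / sky pixels, scaled to 0-10."""
    return int(round(10.0 * np.count_nonzero(mask) / mask.size))
```

The high threshold suppresses spurious cloud pixels in dark, clear scenes, while the looser cloudy-scene threshold captures dim cloud edges.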

 

Suggestion:

Have you examined the spatial scale of the cloud cover variability around the site? If the cloud cover variability is mostly homogeneous around the site, say in a radius of tens of kilometers, why not use satellite cloud cover as another independent objective cloud cover dataset to evaluate the sky imager retrievals? I think this might be something to consider for your future studies.

We have not conducted a study of the spatial variability of cloud cover. Clouds detected from a satellite (such as a geostationary satellite) appear to have the same size/coverage regardless of cloud height. However, in ground-based observation, the closer an object is to the camera lens or the human eye, the larger it looks. Therefore, if the altitude of the cloud is taken into account, we think that satellite and ACOS cloud cover can be compared. Since satellite data are more objective and continuous than human observations, we plan to use them for ACOS cloud cover verification in the future.

 

Minor comments:

Line 28: “radiant” to “radiative”

We have revised this word.

 

Line 30-32: This sentence is too complicated and its meaning is not clearly expressed. Please rephrase.

We have revised this sentence as follows:

“Their interaction with aerosols results in global cooling as well as the greenhouse effect owing to changes in their physical characteristics. Such interactions have an effect on precipitation efficiency, which in turn affects vegetation and water resource security [1–3].”

 

Line 32-34: I may guess what this sentence is trying to say but it is very confusing. Do you mean clouds reduces the solar radiation that reaches the surface?

We have revised this sentence as follows:

“In addition, the cloud cover and cloud type are meteorological elements that increase uncertainty in predicting climate and meteorological phenomena because they perform the dual actions of reflecting solar radiation and absorbing the terrestrial radiation emitted from the surface [4,5].”

 

Line 36: “production of cloud information” to “observations of clouds”?

We have revised this sentence as follows:

“Therefore, observations of clouds, including high-frequency, high-accuracy cloud cover and cloud type, are required as input data for radiation, weather forecast, and climate models [6,7].”

 

Line 38-39: This is not true. As the authors mentioned later in line 52-59, many ground-based remote sensing instruments provide automated cloud cover observations. A lot of them have been part of the routine observations at these sites for more than two decades.

We have revised this sentence as follows:

“Even though cloud cover represents the main observation data for global weather and climate, it is one of the meteorological variables for which ground-based automatic observation has not yet been performed; i.e., cloud cover (in octas or tenths) has only been recorded through the eyes of a human observer and is thus based on the subjective judgment of the observer [8,9].”

 

Line 51-52: I assume the authors are referring to the passive sensors, such as the ones on MODIS. But the spatial resolution depends on the specific instruments. Some active sensors are on polar orbiters as well, and they have a much higher spatial resolution.

We agree with you. We have added the satellite information to this sentence as follows:

“In the case of polar-orbiting satellites, there is a relatively high spatial resolution of 250 m × 250 m (e.g., MODIS), but they can only observe the same point twice a day [3,11,12].”

 

Line 70,72: “region” to “wavelength”?

We think the word “Region” is more suitable.

 

Fig.2: The cloud cover tenth values in the subplot titles do not make sense and are not consistent with the caption.

We have revised the Figure 2 caption and added a description for Figure 2 as follows:

“Figure 2 shows example ACOS images for each cloud cover (0–10 tenths) recorded in DROM. In Figures 2a–2f, the ACOS image taken for each case was overcast, yet the DROM cloud cover was recorded as 0–5 tenths. In Figures 2g–2i, the ACOS image taken for each case was cloud-free, yet the DROM cloud cover was recorded as 6–8 tenths. In Figures 2j and 2k, the ACOS images show partly cloudy and mostly cloudy skies, respectively; however, the DROM cloud cover was recorded as 9 and 10 tenths.”

 

Line 160-162: What is the projection model of this fish eye lens? Equidistance? Have you done any calibration of the lens, such as with chessboard-like process?

We have revised this sentence as follows:

“For the orthogonal projection distortion correction of the sky images obtained by removing obstacle pixels from all the pixels within the 80° SZA, the correction of the x and y axes was performed using Equations (4)–(8) [3,18,35–37].”

 

Line 203: How does “southern direction” translate in the x-y grid of Fig. 5?

We have added the direction information as follows:

“As a high-rise apartment complex was located within 1 km to the south of DROM at which ACOS was installed, luminance was high (Figure 5e) in the southern direction (refer to Figure 3) despite the clear sky, as shown in Figure 5b.”

 

Line 211 to 215: How do you determine the boundary values of 1 and 0.75? If they vary image by image, are they determined objectively? Is the RBR×0.8 in Fig. 4 the way to determine the boundary values? How did you decide to adopt this threshold? How sensitive are your results to this choice? Right now this is a linear fit through the origin. What if the linear fit does not go through the origin, for example using RBR×0.75+0.003? What happens if a nonlinear fit is used?

We have added the following sentence:

“In all cases, the RBR threshold was determined for each image based on the classification of the image.”

 

If the threshold is adjusted by no more than 10%, the calculated cloud cover does not change significantly. In other words, the cloud cover can vary with the threshold, but not substantially. We used simple thresholds instead of nonlinear ones because we did not want to artificially fit the thresholds to the dataset.
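The 10% robustness claim above could be demonstrated with a simple perturbation check of this kind (a sketch of the analysis being described, not code from the study; names are hypothetical):

```python
import numpy as np

def sensitivity_to_threshold(rbr, base_threshold,
                             perturbations=(-0.10, -0.05, 0.05, 0.10)):
    """Change in cloud cover (tenths) when the RBR threshold is scaled
    by each fractional perturbation; a sketch of the +/-10% check."""
    def tenths(t):
        # cloud cover implied by threshold t: fraction of pixels above it
        return 10.0 * np.count_nonzero(rbr > t) / rbr.size

    base = tenths(base_threshold)
    return {p: tenths(base_threshold * (1.0 + p)) - base for p in perturbations}
```

Reporting such a table for a representative set of images would let readers verify directly how much the retrieved cover moves under threshold perturbations.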

 

Fig. 6 to 7: Because of the nature of cloud cover, the frequencies of occurrence of clear sky and overcast are much larger than those of the other bins. The variability in the other bins is not obvious with the current colorbar. Maybe compare the frequency occurrences/histograms in each cloud cover bin of the two observations directly as well?

Except for the cloud-free and overcast conditions, the frequency of agreement in each bin is less than 7%, but we think the figure is useful for conveying the distribution visually.
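The reviewer's per-bin comparison could be sketched as follows (a hypothetical helper illustrating the suggestion, not code from the study); normalising each DROM bin separately keeps the sparse bins visible despite the dominance of clear-sky and overcast cases:

```python
import numpy as np

def per_bin_agreement(acos_tenths, drom_tenths):
    """Joint frequency table of ACOS vs. DROM cloud cover (0-10 tenths),
    normalised within each DROM bin so that sparse bins stay visible."""
    joint = np.zeros((11, 11))
    for a, d in zip(acos_tenths, drom_tenths):
        joint[d, a] += 1
    row_sums = joint.sum(axis=1, keepdims=True)
    # rows with no observations stay all-zero instead of dividing by zero
    return np.divide(joint, row_sums, out=np.zeros_like(joint),
                     where=row_sums > 0)
```

Each row then shows the conditional distribution of ACOS cover given a DROM value, so agreement in the rarely populated middle bins is no longer swamped by the extreme bins.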

Reviewer 2 Report

In this article the authors present a new algorithm for nighttime cloud cover retrieval based on automatic camera observations. The novelty and originality of this work lie in the decision to perform automated cloud cover detection at night without using additional devices beyond the camera or IR data. The research is well defined, and the implemented method is clearly described. The Results and Conclusion sections are easy to read and understand. For those reasons, I deem this work ready to be published after minor revisions. Thus, the comments given below are suggested to be followed:

 

Page 2 Line 61 - please consider eliminating explicit references to previous studies; remote sensing uses the bracket-number convention for citation, so my suggestion is to revisit and reformulate this sentence;

 

Page 3 Line 99 – brightness is a physical quantity with a specific unit of measure (lux). In my opinion it is better to address value between 0 and 255 as “digital number”. Please consider this change here and in later parts of the text;

 

Page 5 Line 125 – my suggestion is to reformulate Figure 2 caption in order to highlight that these are examples of images discarded because clearly human or ACOS system errors have been recognised;

 

Page 14 Line 347 – like above (Page 2 Line 61).

Author Response

In this article the authors present a new algorithm for nighttime cloud cover retrieval based on automatic camera observations. The novelty and originality of this work lie in the decision to perform automated cloud cover detection at night without using additional devices beyond the camera or IR data. The research is well defined, and the implemented method is clearly described. The Results and Conclusion sections are easy to read and understand. For those reasons, I deem this work ready to be published after minor revisions. Thus, the comments given below are suggested to be followed:

We appreciate the valuable opinions and comments of the reviewer, thanks to which we could revise our manuscript to largely enhance its quality. We have made the following revisions in accordance with the additional comments of the reviewer.

 

Page 2 Line 61 - please consider eliminating explicit references to previous studies; remote sensing uses the bracket-number convention for citation, so my suggestion is to revisit and reformulate this sentence;

We have revised this sentence as follows:

“Therefore, many studies [23–26] calculated nighttime cloud cover by mounting an infrared (IR) sensor or IR filter or by detecting the radiance emitted from clouds and the sky using a sky scanner.”

 

Page 3 Line 99 – brightness is a physical quantity with a specific unit of measure (lux). In my opinion it is better to address value between 0 and 255 as “digital number”. Please consider this change here and in later parts of the text;

We have revised this sentence as follows:

“The images captured by ACOS were processed by converting the red, green, and blue (RGB) channels of each pixel in the image into a digital number (brightness) between 0 and 255.”
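Since the manuscript excerpt does not specify the exact RGB-to-brightness weighting, the sketch below uses the common Rec. 601 luma coefficients to illustrate the kind of conversion described; the actual coefficients used in the ACOS processing may differ:

```python
import numpy as np

def brightness_dn(rgb):
    """Collapse an HxWx3 uint8 RGB image into one 0-255 digital number
    per pixel. The Rec. 601 luma weights below are an illustrative
    assumption; the manuscript does not state the exact conversion."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    dn = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(np.rint(dn), 0, 255).astype(np.uint8)
```

Because the weights sum to 1, the output stays within the same 0–255 digital-number range as the input channels.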

 

Page 5 Line 125 – my suggestion is to reformulate Figure 2 caption in order to highlight that these are examples of images discarded because clearly human or ACOS system errors have been recognised;

We have revised the Figure 2 caption and added a description for Figure 2 as follows:

“Figure 2 shows example ACOS images for each cloud cover (0–10 tenths) recorded in DROM. In Figures 2a–2f, the ACOS image taken for each case was overcast, yet the DROM cloud cover was recorded as 0–5 tenths. In Figures 2g–2i, the ACOS image taken for each case was cloud-free, yet the DROM cloud cover was recorded as 6–8 tenths. In Figures 2j and 2k, the ACOS images show partly cloudy and mostly cloudy skies, respectively; however, the DROM cloud cover was recorded as 9 and 10 tenths.”

“Images captured by ACOS for each DROM human observation cloud cover (0–10 tenths, a)–k)).”

 

Page 14 Line 347 – like above (Page 2 Line 61).

We have revised this sentence as follows:

“Many studies [23,26,35,43,44] conducted research on nighttime cloud cover detection using infrared (IR) sensors.”

Reviewer 3 Report

The focus of the paper should be reconsidered significantly.

 

Review of "Development of an algorithm for nighttime cloud cover retrieved from the Automatic Cloud Observation System (ACOS)" by Bu-Yo Kim and Joo Wan Cha

 

In the manuscript, an approach for nighttime total cloud cover (TCC) retrieval is presented. The main advantage of the whole approach seems to be the ability of the camera (Canon EOS 6D) to capture scenes with a very high ISO (25600) and a very long exposure (up to 5 s). There is also a questionable factor making this approach work, namely a third-party light source emitted by the residential area located close to the observational platform.

 

The manuscript has a reasonable structure, and it is written in a clear manner. Technically, the presented study looks somewhat consistent. The English of the manuscript is easy to understand. There are, however, several fundamental flaws in this study.

 

First, the exclusion of 847 of 3944 cases is not justified in any appropriate way. In the text, the explanation is the following: "As shown in Figure 2, the human observation data were compared with the ACOS image data, and 847 cases (approximately 21% of all cases) that exhibited large differences (human observation error) were excluded from the analysis." I cannot understand why the large differences between human observations and ACOS output are referred to as human observation errors. As far as I am aware of the WMO recommendations regarding cloud observations [1], human observations are supposed to be the most reliable ones. In light of these recommendations, I would strongly disagree with this exclusion. The mentioned exclusion also refers to Figure 2, which is meant to help to understand the adjustment. However, Figure 2 just presents examples of the imagery acquired with the camera of the ACOS package. There are no examples of the excluded cases, nor is there an explanation of the causes of human observation errors. Without a full understanding of the origins of the mentioned exclusion of cases, there is no way to consider the presented quality of the nighttime cloud cover retrieval to be reliable. The error in the quality estimates may exceed 10% or more since the excluded cases are about 21% of the whole set of observations. This kind of flaw cannot be considered a minor one, and the design of the study should be revised regarding the data preprocessing and clearance.

 

I would like to draw the authors' attention to one more issue of their manuscript. In Section 3.1, the description of the distortion correction is provided (L154-178). As it is mentioned in the text, the main reason for the need for this correction is the fact that an actual observer sees the sky hemisphere differently compared to the image of the fish-eye-equipped camera (L177). There is no doubt that an observer does not see a sky hemisphere the same way as we see it in a distorted image. However, there is no doubt that an observer does not see a sky hemisphere the same way as we see it in a distortion-corrected picture, either. A human observes the skydome with his/her eyes, turning around several times. He/she estimates total cloud cover based on a model of the current sky state, which is created by his/her brain after some kind of transformation of the series of "images" captured by his/her eyes. The field of near-peripheral view of a human eye is pretty small, about 60 degrees, so the images that an observer's brain is processing are nearly "flat." So, in my understanding, the model of the sky is created from the eye-captured "images" that are distorted almost the same way as in the fish-eye lens (small regions of the skydome are projected to small regions of a resulting image, with the distortions introduced by the angle of view, which is close to 90° near the nadir and close to 0° near the horizon). The human eye sees distortions that are somewhat similar to the ones of a camera equipped with a fish-eye lens. However, the brain has its own model of the sky state, which is 3D. We cannot assume that the brain "flattens" it the same way that we try to de-distort during the image preprocessing. Neither the referred papers [2–5] nor the presented study justifies the need for the distortion correction.
I have to remind the authors that the primary goal of the study (following its title) is to create an algorithm for the retrieval of total cloud cover, rather than to correct an image so that it looks better or flatter. Since we do not know the model of the transformation that a human brain applies to the set of "images" captured by the eyes, we cannot model it with some kind of correction of a fish-eye image. The measure of success of the presented algorithm should not be some prettiness of a corrected fish-eye image. Instead, it should be the accuracy of the estimates of TCC compared to human observations. This accuracy should be the optimized metric. In light of the abovementioned reasons, the application of the distortion correction looks like a random, unjustified transformation whose impact on the quality of TCC retrieval is not assessed.

 

One more serious issue of the presented study is related to the light emission of the residential area mentioned in the text multiple times (L201-202, 323-324). As far as I understand, the impact of this light emission has two sides. This apartment complex is, in fact, the main source of the light highlighting the clouds, and it is the only reason the presented approach works at all. On the other hand, the light source of this residential area has its flaws: it may be unstable; it may be affected by blackouts or pandemic-related changes in the lighting of the city. In general, it is strongly dependent on artificial factors of social origin. Since the main goal of the study is to create a system for TCC observations, there should not be impactful factors of any social origin in the whole approach. Alternatively, the impact of these kinds of factors should be assessed and mentioned clearly. One more issue of this light source is that the light is non-uniform. This issue is described in the manuscript and is considered in the algorithm. However, this way, the presented algorithm becomes specific to one particular point of the planet and cannot be used in any other place. It cannot even be adapted to any other observational point of the planet, since it depends on a third-party light source whose origin is not reliable (in contrast to the Sun, which is obviously present in the sky in the daytime for most observational platforms). Considering this, the presented algorithm is useless in any other setup. Even though there is a remarkable optical package able to see clouds in the nighttime, the outcome of this observational station has unpredictable uncertainties due to the impact of socially determined factors. This is not the type of observations that should be considered reliable in any meteorological studies.

 

 

Since the focus of the presented study is the algorithm for TCC retrieval, the lack of known studies on TCC retrieval algorithms in the Introduction section is a significant issue.

 

One more issue of the presented study is related to the method of TCC calculation. Today, in 2020, there are a lot of papers similar to the presented one (meaning the algorithm description) that present human-crafted, threshold-based algorithms for TCC acquisition from all-sky optical imagery [2,3,6–10]. The authors even mentioned some of them. It seems to me that there is no way to improve the existing algorithms even when the images are acquired with a new optical package and the color channels need to be modified because the observations are taken at night; it is just a matter of the threshold values. Moreover, with all the considerations that a human researcher may propose, the implemented algorithms are unlikely to outperform data-driven methods [11–13]. And even if the authors choose to stick to human-crafted pixel features and human-crafted sky-vs-cloud discrimination rules, a comparison with other algorithms should be presented in the Results section. Otherwise, the focus of the manuscript should not be the algorithm, but some other aspects of the study.

 

The presentation of the results needs to be improved. There is no sense in presenting the agreement of the algorithm results as fractions unless the frequencies of particular TCC tenths are specified. As is mentioned in the manuscript, the dataset is highly unbalanced with respect to tenths of TCC. For example, considering the relative frequency of 20.77% for 10 tenths in summertime (L252), the high correspondence shown in Figure 7b may mislead a reader. In fact, a constant algorithm always giving 10 tenths in summer would be as precise as the presented one for 10 tenths. So, there is a strong need for a rebalancing of the dataset with respect to the target variable (TCC) in order to present a reliable assessment of the quality of the approach.

 

The quality measure is also an issue of the presented study. First, the authors should clearly describe measures of quality in the methods section. One of them is clearly RMSE (mentioned in the Results section), which is once again not reliable in the case of datasets that are highly unbalanced with respect to the target variable (TCC). Another one is the Pearson correlation (also mentioned only in the Results section); the note about the unbalanced datasets applies here as well.

 

The presented study seems novel to me for the only reason: as of my best knowledge, there are no such setups with optical packages that would be able to capture clouds in the nighttime. However, considering all the abovementioned issues, I would strongly disagree that the study may be useful in scientific measurements of TCC.


Alternatively, the authors may shift the focus of the manuscript to the description of the optical package, assess the impacts of the variability of the third-party light source, and the distortion of the fish-eye lens.

 

References


1. Guide to meteorological instruments and methods of observation, Chapter 15 “Observations on clouds”, 15.2 “Estimation and observation of cloud amount, height and type”; World Meteorological Organization: Geneva, Switzerland, 2008;
2. Yang, J.; Min, Q.; Lu, W.; Yao, W.; Ma, Y.; Du, J.; Lu, T. An automated cloud detection method based on green channel of total sky visible images. Atmospheric Measurement Techniques Discussions 2015, 8.
3. Lothon, M.; Barnéoud, P.; Gabella, O.; Lohou, F.; Derrien, S.; Rondi, S.; Chiriaco, M.; Bastin, S.; Dupont, J.-C.; Haeffelin, M.; et al. ELIFAN, an algorithm for the estimation of cloud cover from sky imagers. Atmospheric Measurement Techniques 2019, 12, 5519–5534, doi:10.5194/amt-12-5519-2019.
4. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. WAHRSIS: A low-cost high-resolution whole sky imager with near-infrared capabilities. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXV; International Society for Optics and Photonics, 2014; Vol. 9071, p. 90711L.
5. Cazorla Cabrera, A.; Shields, J.E.; Karr, M.E.; Olmo Reyes, F.J.; Burden, A.; Alados-Arboledas, L. Technical Note: Determination of aerosol optical properties by a calibrated sky imager. 2009.
6. Long, C.; DeLuisi, J. Development of an automated hemispheric sky imager for cloud fraction retrievals.; 1998; pp. 171–174.
7. Krinitskiy, M.A.; Sinitsyn, A.V. Adaptive algorithm for cloud cover estimation from all-sky images over the sea. Oceanology 2016, 56, 315–319, doi:10.1134/S0001437016020132.
8. Yamashita, M.; Yoshimura, M.; Nakashizuka, T. Cloud cover estimation using multitemporal hemisphere imageries. International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences 2004, 35, 826–829.
9. Yamashita, M.; Yoshimura, M. Ground-based cloud observation for satellite-based cloud discrimination and its validation. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2012, 39, B8.
10. Kazantzidis, A.; Tzoumanikas, P.; Bais, A.F.; Fotopoulos, S.; Economou, G. Cloud detection and classification with the use of whole-sky ground-based images. Atmospheric Research 2012, 113, 80–88, doi:10.1016/j.atmosres.2012.05.005.
11. Krinitskiy, M. Artificial neural networks for total cloud cover estimation and solar disk state detection using all sky images. In Proceedings of the Geophysical Research Abstracts; 2018; Vol. 20, pp. EGU2018-18036.
12. Krinitskiy, M. Cloud cover estimation optical package: New facility, algorithms and techniques. AIP Conference Proceedings 2017, 1810, 080009, doi:10.1063/1.4975540.
13. Dev, S.; Lee, Y.H.; Winkler, S. Color-Based Segmentation of Sky/Cloud Images From Ground-Based Cameras. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2017, 10, 231–242, doi:10.1109/JSTARS.2016.2558474.

Author Response

The focus of the paper should be reconsidered significantly.

Review of "Development of an algorithm for nighttime cloud cover retrieved from the Automatic Cloud Observation System (ACOS)" by Bu-Yo Kim and Joo Wan Cha

In the manuscript, an approach for nighttime total cloud cover (TCC) retrieval is presented. The main advantage of the whole approach seems to be the ability of the camera (Canon EOS 6D) to capture scenes with a very high ISO (25600) and a very long exposure (up to 5 s). There is also a questionable factor making this approach work, namely a third-party light source emitted by the residential area located close to the observational platform.

The manuscript has a reasonable structure, and it is written in a clear manner. Technically, the presented study looks somewhat consistent. The English of the manuscript is easy to understand. There are, however, several fundamental flaws in this study.

We appreciate the valuable opinions and comments of the reviewer, thanks to which we could revise our manuscript to largely enhance its quality. We have made the following revisions in accordance with the additional comments of the reviewer.

 

First, the exclusion of 847 of 3944 cases is not justified in any appropriate way. In the text, the explanation is the following: "As shown in Figure 2, the human observation data were compared with the ACOS image data, and 847 cases (approximately 21% of all cases) that exhibited large differences (human observation error) were excluded from the analysis." I cannot understand why the large differences between human observations and ACOS output are referred to as human observation errors. As far as I am aware of the WMO recommendations regarding cloud observations [1], human observations are supposed to be the most reliable ones. In light of these recommendations, I would strongly disagree with this exclusion. The mentioned exclusion also refers to Figure 2, which is meant to help to understand the adjustment. However, Figure 2 just presents examples of the imagery acquired with the camera of the ACOS package. There are no examples of the excluded cases, nor is there an explanation of the causes of human observation errors. Without a full understanding of the origins of the mentioned exclusion of cases, there is no way to consider the presented quality of the nighttime cloud cover retrieval to be reliable. The error in the quality estimates may exceed 10% or more since the excluded cases are about 21% of the whole set of observations. This kind of flaw cannot be considered a minor one, and the design of the study should be revised regarding the data preprocessing and clearance.

We consider the cloud cover observed by humans to be accurate and used it as verification data for the ACOS cloud cover. In this study, the maximum number of human observation data points (3,944) was collected and analyzed, excluding cases where ACOS data were missing. However, there were cases in which the recorded observation differed from the cloud cover shown in the ACOS image. Representative images are shown in Figure 2. A total of 847 cases, including those in Figure 2, were excluded from the analysis for the following reasons:

 

  1. Did the observer go to the field before observation to adjust their eyes to the dark environment in order to observe the clouds at night?
  2. Were the clouds observed and recorded on time?
  3. Did the observer observe the clouds (only a portion of the clouds) from the window at the observatory office without going to the field?
  4. Were observations recorded with reference to the cloud cover estimated by the ceilometer, which is automatically measured and recorded, without going to the field?

 

These questions touch on ethical problems regarding observers, and we think there are several such unconfirmed human observation issues. Despite their subjectivity, human observations still provide the most reliable data; however, such observation errors cannot be completely overlooked. Therefore, we believe that it is desirable to remove data that indicate human observation errors, such as the ACOS image cases shown in Figure 2, before analysis.

 

We have also revised the Figure 2 caption and added a description for Figure 2 as follows:

“Figure 2 shows example ACOS images for each cloud cover (0–10 tenths) recorded in DROM. In Figures 2a–2f, the ACOS image taken for each case was overcast, yet the DROM cloud cover was recorded as 0–5 tenths. In Figures 2g–2i, the ACOS image taken for each case was cloud-free, yet the DROM cloud cover was recorded as 6–8 tenths. In Figures 2j and 2k, the ACOS images show partly cloudy and mostly cloudy skies, respectively; however, the DROM cloud cover was recorded as 9 and 10 tenths.”

“Images captured by ACOS for each DROM human observation cloud cover (0–10 tenths, a)–k)).”

 

I would like to draw the authors' attention to one more issue of their manuscript. In Section 3.1, the description of the distortion correction is provided (L154-178). As it is mentioned in the text, the main reason for the need for this correction is the fact that an actual observer sees the sky hemisphere differently compared to the image of the fish-eye-equipped camera (L177). There is no doubt that an observer does not see a sky hemisphere the same way as we see it in a distorted image. However, there is no doubt that an observer does not see a sky hemisphere the same way as we see it in a distortion-corrected picture, either. A human observes the skydome with his/her eyes, turning around several times. He/she estimates total cloud cover based on a model of the current sky state, which is created by his/her brain after some kind of transformation of the series of "images" captured by his/her eyes. The field of near-peripheral view of a human eye is pretty small, about 60 degrees, so the images that an observer's brain is processing are nearly "flat." So, in my understanding, the model of the sky is created from the eye-captured "images" that are distorted almost the same way as in the fish-eye lens (small regions of the skydome are projected to small regions of a resulting image, with the distortions introduced by the angle of view, which is close to 90° near the nadir and close to 0° near the horizon). The human eye sees distortions that are somewhat similar to the ones of a camera equipped with a fish-eye lens. However, the brain has its own model of the sky state, which is 3D. We cannot assume that the brain "flattens" it the same way that we try to de-distort during the image preprocessing. Neither the referred papers [2–5] nor the presented study justifies the need for the distortion correction.
I have to remind the authors that the primary goal of the study (following its title) is to create an algorithm for the retrieval of total cloud cover, rather than to correct an image so that it looks better or flatter. Since we do not know the model of the transformation that a human brain applies to the set of "images" captured by the eyes, we cannot model it with some kind of correction of a fish-eye image. The measure of success of the presented algorithm should not be some prettiness of a corrected fish-eye image. Instead, it should be the accuracy of the estimates of TCC compared to human observations. This accuracy should be the optimized metric. In light of the abovementioned reasons, the application of the distortion correction looks like a random, unjustified transformation whose impact on the quality of TCC retrieval is not assessed.

We agree in part with the reviewer's comment. Unlike a camera, a human observes various directions of the hemispherical sky and comprehensively judges and records the clouds. A camera equipped with a fisheye lens, on the other hand, photographs the entire hemisphere but cannot store it in three dimensions; distortion therefore occurs in the process of saving it as a 2-D image. If the images were stored as an equidistant projection, there would be no difference in cloud cover due to distortion. However, we assumed that the 2-D images taken by ACOS were an orthogonal projection and corrected them accordingly (Hughes et al., 2010; Cłapa et al., 2014). The difference in cloud cover between the distortion-corrected and uncorrected images was a few percent.

We have added sentences related to the distortion correction as follows:

“In other words, objects photographed in the process of storing the 3-D sky dome image as a 2-D image may be distorted, and a cloud may appear to have a coverage different from its actual one [11]. The difference is particularly large in the edge region of the image.”

“For the orthogonal projection distortion correction of the sky images obtained by removing obstacle pixels from all the pixels within the 80° SZA, the correction of the x and y axes was performed using Equations (4)–(8) [3,18,35–37].”
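The kind of projection remapping the response describes can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation (Equations (4)–(8) in the manuscript): the function name, the focal-length parameter `f`, and the nearest-neighbour resampling are all assumptions for illustration.

```python
import numpy as np

def orthographic_to_equidistant(img, f):
    """Remap a fisheye image from an orthographic projection
    (r = f*sin(theta)) to an equidistant one (r = f*theta), so that
    equal zenith-angle intervals map to equal radial distances.
    `img` is a square array centred on the zenith; `f` is the focal
    length in pixels (both hypothetical for this sketch)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w))
    # target (equidistant) radius and azimuth for every output pixel
    r_eq = np.hypot(xs - cx, ys - cy)
    theta = np.clip(r_eq / f, 0.0, np.pi / 2)  # zenith angle, capped at 90 deg
    phi = np.arctan2(ys - cy, xs - cx)
    # corresponding radius in the original orthographic image
    r_ortho = f * np.sin(theta)
    src_x = np.clip(np.round(cx + r_ortho * np.cos(phi)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r_ortho * np.sin(phi)).astype(int), 0, h - 1)
    return img[src_y, src_x]  # nearest-neighbour lookup
```

Because the orthographic radius grows more slowly than the equidistant one near the horizon, this remap stretches the compressed edge region outward, which is why the uncorrected and corrected cloud fractions differ by a few percent mainly at large zenith angles.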

One more serious issue of the presented study is related to the light emission of the residential area mentioned in the text multiple times (L201-202, 323-324). As far as I understand, the impact of this light emission has two sides. This apartment complex is, in fact, the main source of the light illuminating the clouds, and it is the only reason the presented approach works at all. On the other hand, this light source has its flaws: it may be unstable; it may be affected by blackouts or pandemic-related changes in city lighting. In general, it is strongly dependent on artificial factors of social origin. Since the main goal of the study is to create a system for TCC observations, there should be no impactful factors of social origin in the approach; alternatively, the impact of such factors should be assessed and clearly stated. A further issue with this light source is that its light is non-uniform. This issue is described in the manuscript and is accounted for in the algorithm. However, as a result, the presented algorithm has become specific to one particular point on the planet and cannot be used anywhere else. It cannot even be adapted to another observational site, since it depends on a third-party light source that is not reliable (in contrast to the Sun, which is obviously present in the sky in the daytime for most observational platforms). Considering this, the presented algorithm is useless in any other setup. Even though this is a remarkable optical package able to see clouds at night, the output of this observational station has unpredictable uncertainties due to the impact of socially determined factors. This is not the type of observation that should be considered reliable in meteorological studies.

We agree in part with the reviewer's comment. Light pollution caused by artificial light sources is generated around the DROM, where ACOS is installed. Light pollution can affect regions tens of kilometers across; the closer the artificial light source, the greater the impact (Jechow et al., 2017). As a result, luminance is greater in the edge region of the image even where there are no clouds. The RBR threshold was therefore set by classifying images based on the luminance characteristics of the edge and center regions. The algorithm can also be applied in the absence of an artificial light source, and because no regions are misdetected, it enables better classification of sky and cloud.

We have revised a sentence and added references as follows:

“In the edge region of the image, the luminance is large even in the cloud-free region. It can be increased by atmospheric turbidity as well as by the light pollution around the observation equipment [8,22,40,43].”
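The edge/center handling described in the response can be illustrated with a toy sketch: pixels whose red-blue ratio (RBR) exceeds a threshold are flagged as cloud, and the threshold is raised toward the image edge to offset the brighter, light-polluted horizon. The specific threshold values, the linear radial boost, and the function name are assumptions for illustration only; the paper's actual thresholds are set per image class.

```python
import numpy as np

def cloud_fraction(rgb, base_thresh=0.6, edge_boost=0.1):
    """Toy RBR-based sky/cloud split on an H x W x 3 RGB image.
    `base_thresh` and `edge_boost` are illustrative values, not the
    tuned thresholds of the ACOS algorithm."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    rbr = r / np.maximum(b, 1.0)              # red-blue ratio per pixel
    h, w = r.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w))
    # normalized distance from image center: 0 at centre, 1 at the corner
    rad = np.hypot(xs - cx, ys - cy) / np.hypot(cx, cy)
    thresh = base_thresh + edge_boost * rad   # stricter near the bright edge
    cloud = rbr > thresh
    return cloud.mean()                       # fraction of cloudy pixels
```

Raising the threshold with radius means a moderately reddish pixel near the horizon is not flagged as cloud, while the same pixel value near the zenith would be, which is the behaviour the response attributes to the edge/center classification.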

Since the focus of the presented study is the algorithm for TCC retrieval, the lack of a review of known TCC retrieval algorithms in the Introduction section is a significant issue.

One more issue of the presented study is related to the method of TCC calculation. Today, in 2020, there are many papers similar to the presented one (in terms of the algorithm description) that present human-crafted, threshold-based algorithms for TCC retrieval from all-sky optical imagery [2,3,6–10]. The authors even mention some of them. It seems to me that there is no way to improve on the existing algorithms even when the images are acquired with a new optical package and the color channels must be modified for nighttime observations; it is just a matter of the threshold values. Moreover, with all the considerations a human researcher may propose, the implemented algorithms are unlikely to outperform data-driven methods [11–13]. And even if the authors choose to stick to human-crafted pixel features and human-crafted sky-vs-cloud discrimination rules, a comparison with other algorithms should be presented in the Results section. Otherwise, the focus of the manuscript should not be the algorithm but some other aspect of the study.

We agree with the reviewer's comments. Unfortunately, most algorithms are designed to detect clouds during the day, and most that can detect clouds at night have been tested using IR channels. Those studies did not compare their results with long-term human observations of cloud cover, instead comparing the algorithm with measurements by specific equipment or in specific cases. Although this study has similarities to previous work, it differs in its purpose: detecting nighttime clouds with a simple method that uses only the RGB channels of an ordinary camera, without relying on other equipment, and evaluating the accuracy of the output through comparative analysis with long-term human observation data.

The presentation of the results needs to be improved. There is no sense in presenting the agreement of the algorithm results as fractions unless the frequency of the particular TCC tenths is specified. As mentioned in the manuscript, the dataset is highly unbalanced with respect to tenths of TCC. For example, considering the relative frequency of 20.77% for 10 tenths in summertime (L252), the high correspondence shown in Figure 7b may mislead a reader. In fact, a constant algorithm always reporting 10 tenths in summer would be as precise as the presented one for 10 tenths. There is therefore a strong need for rebalancing the dataset with respect to the target variable (TCC) in order to present a reliable assessment of the quality of the approach.

Cloud amounts may vary depending on the region and season, but cloud-free and overcast cases are thought to occur at a frequency of 40–60% of the total. The cloud cover calculation algorithm in this study is not specialized for a specific amount of cloud cover; it calculates cloud cover in all images by setting a suitable RBR threshold according to the luminance and RBR of the image. The results are significant precisely because they are compared with long-term observational data, and we believe that rebalancing the dataset could confound such an analysis.

The quality measures are also an issue of the presented study. First, the authors should clearly describe the measures of quality in the Methods section. One of them is clearly the RMSE (mentioned only in the Results section), which is, once again, not reliable for datasets that are highly unbalanced with respect to the target variable (TCC). Another is the Pearson correlation (also mentioned only in the Results section); the note about unbalanced datasets applies here as well.

We have added a description of the bias, RMSE, and R in Section 2. We think that the distribution of the data can be confirmed in the Results section.

“For the frequency analysis in all cases and seasons, the accuracy of the calculated cloud cover was evaluated using the bias, RMSE, and correlation coefficient (R) as shown in Equations (1)–(3).”

“Here, M represents the cloud cover calculated from the algorithm proposed in this study, O represents the human-observed DROM cloud cover, and N represents the total number of cases (3,097).”
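The three measures quoted above follow their standard definitions; a minimal sketch, assuming `m` and `o` are numpy arrays of calculated and observed cloud-cover tenths, might look like:

```python
import numpy as np

def bias(m, o):
    """Mean difference between calculated (m) and observed (o) cloud cover."""
    return np.mean(m - o)

def rmse(m, o):
    """Root-mean-square error between calculated and observed cloud cover."""
    return np.sqrt(np.mean((m - o) ** 2))

def pearson_r(m, o):
    """Pearson correlation coefficient between the two series."""
    return np.corrcoef(m, o)[0, 1]
```

As the reviewer notes, all three aggregate over the full sample of N cases, so a dataset dominated by cloud-free and overcast cases can yield high R and low RMSE even for a trivial predictor.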

The presented study seems novel to me for one reason only: to the best of my knowledge, there are no other setups with optical packages able to capture clouds at night. However, considering all the abovementioned issues, I would strongly disagree that the study may be useful for scientific measurements of TCC.

Alternatively, the authors may shift the focus of the manuscript to the description of the optical package, assess the impacts of the variability of the third-party light source, and the distortion of the fish-eye lens.

We agree with the reviewer's comment to some extent. To address the reviewer's concerns, the overall composition, results, and analysis method of the paper would have to be completely re-written. However, the other reviewers made no comments in opposition to the methods and results of this study. We would ask the reviewer to reconsider our study in a positive light.

Round 2

Reviewer 3 Report

Review #2 of "Development of an algorithm for nighttime cloud cover retrieved from the Automatic Cloud Observation System (ACOS)" by Bu-Yo Kim and Joo Wan Cha

I appreciate the authors' commitment to improving the appearance of the manuscript. However, I believe that fundamental changes should be made, rather than ones that merely make the paper look better.

 

[minor] In my understanding, the exclusion of 847 images was not justified convincingly. I would recommend the authors stress that the exclusion was made based on knowledge of issues with the human observations that may occur at this particular observational station. If the only way for the authors to detect this kind of issue was via a large difference between the ACOS estimates and the DROM records, this should be clearly stated in the text, as it may be the only way to identify rejected human observations. Without a clear explanation of the reason for and method of this source-data cleaning, the results cannot be considered reliable.

[major] Regarding the light pollution, I cannot agree with the authors. If I understand correctly, without this artificial light source the clouds would not be visible, no matter how sensitive a camera were installed and no matter how long the exposure. Thus, the method is still strongly dependent on the presence of an artificial ground-based light source. In this regard, I would again stress that the presented algorithm is useless in any other setup, or needs to be reformulated completely. Also, the uncertainties of the TCC retrieval due to the instability of the artificial light source are not estimated. This is still not a type of measurement that would be considered applicable in meteorological studies.

[major] One more issue the authors did not address in the revised version of the manuscript is the question of the optimality of the presented algorithm. This issue has multiple sides. One of them is the absence of a discussion of existing algorithms. Note that there are several algorithms in recent papers on TCC retrieval. I agree that the source of the imagery is usually an optical package unable to retrieve an image at night. However, the very essence of the existing TCC retrieval algorithms is their optimization nature: every threshold of every color ratio is optimized with respect to some objective, and it is in this way that those thresholds are found to be “suitable.” The approach presented in the manuscript looks arbitrary to me, since there is no justification of which criteria were used when the authors tuned the threshold values and declared them “suitable”. The algorithm of the study is presented in a declarative way; the phrase “In all cases, the RBR threshold was determined … based on the classification of the image” [L242-243] does not justify the choice of the RBR threshold. Since there is a general approach of optimizing the parameters of algorithms based on some generic color combination, I cannot see any novelty in the presented algorithm except the branch that takes the non-uniformity of the artificial light source into account; however, the impact of this branch was not assessed or even described. The second side of the non-optimality issue is the absence of any comparison of the presented algorithm with existing ones. Since the existing algorithms are generic and imply optimization of their parameters (e.g., thresholds), it does not matter what kind of imagery the source data are. I cannot understand the sense of a study that does not provide any comparison with existing approaches (either human-designed algorithms or data-driven models).

[major] In my opinion, the novelty of the presented algorithm is insignificant in its present form. The title “Development of an algorithm…” does not reflect the content of the manuscript, since no development of algorithms was actually performed. The corrections of the color distortions imposed by non-uniform light pollution cannot be considered significant; instead, they should be regarded as pre-processing of the source imagery. I would recommend the authors consider changing the title and the main focus of the study. As an alternative, the manuscript may be titled after the ACOS package or the observational campaign that produced the presented dataset of nighttime skydome imagery.

[minor] I still cannot see in the manuscript the characteristics of the dataset regarding the frequency of images corresponding to the different TCC classes registered by DROM. Without this information, I see no way to assess the quality of the presented algorithm from the displayed results.
