Technical Note

Nighttime Reflectance Generation in the Visible Band of Satellites

1 School of Space Research, Kyung Hee University, Gyeonggi-do 17104, Korea
2 Department of Environment, Energy, and Geoinformatics, Sejong University, Seoul 05006, Korea
3 InSpace Co., Ltd., Daejeon 305-343, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(18), 2087; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11182087
Submission received: 17 July 2019 / Revised: 2 September 2019 / Accepted: 2 September 2019 / Published: 6 September 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Visible (VIS) bands, such as the 0.675 μm band in geostationary satellite remote sensing, have played an important role in monitoring and analyzing weather and climate change during the past few decades, with coarse spatial but high temporal resolution. Recently, many deep learning techniques have been developed and applied in a variety of applications and research fields. In this study, we developed a deep-learning-based model to generate non-existent nighttime VIS satellite images using the Conditional Generative Adversarial Nets (CGAN) technique. For training and validation of our CGAN-based model, we used daytime image data sets of reflectance in the Communication, Ocean and Meteorological Satellite / Meteorological Imager (COMS/MI) VIS (0.675 μm) band and radiance in the longwave infrared (10.8 μm) band of the same sensor over five years (2012 to 2017). Our results show high accuracy (bias = −2.41 and root mean square error (RMSE) = 36.85 during summer; bias = −0.21 and RMSE = 33.02 during winter) and high correlation (correlation coefficient (CC) = 0.88 during summer; CC = 0.89 during winter) between the observed and CGAN-generated images for the COMS VIS band. Consequently, our CGAN-based model can be effectively used in a variety of meteorological applications, such as cloud, fog, and typhoon analyses, during both daytime and nighttime.

1. Introduction

The importance of weather and climate change information is increasing because of many human demands, including leisure, business, natural disaster relief, and military operations. Satellites, the only tools for global observation of the Earth’s surface and atmosphere, play a crucial role in monitoring global weather and climate change and in providing short- to long-term analyses and predictions of the environment. In particular, geostationary meteorological satellites are important sources of data for weather analysis; for natural disasters such as typhoons, floods, and heavy rainfall; for geophysical parameters such as sea surface temperature; and for long-term records for climatic applications [1,2].
Geostationary satellites with sensors spanning visible (VIS) to infrared (IR) wavelengths have the advantage of relatively high temporal and spatial resolutions for observing the Earth’s atmosphere and surface, because they measure the electromagnetic radiation emitted, reflected, and scattered from the Earth after it passes through the surrounding atmosphere. However, VIS wavelengths have a disadvantage at night, when reflected sunlight is unavailable. Many national meteorological institutions operate geostationary meteorological satellites such as the Geostationary Operational Environmental Satellite-West (GOES-W), GOES-East (GOES-E), Himawari-7 (MTSAT-2), and Meteosat Second Generation (MSG) to meet the basic requirement of a geostationary image at least twice per hour [3]. Currently, the GOES-16, Himawari-8/9, Feng Yun-4A (FY-4A), Meteosat Third Generation (MTG), and GeoKompsat-2 Atmosphere (GK-2A) satellites operate with 16 channels, including three VIS and 13 IR bands, a spatial resolution doubled from 4 to 2 km at the nadir for the IR channels, and a temporal resolution tripled from 30 to 10 min for full-disk observation [4,5].
To observe clouds and the Earth’s surface, geostationary meteorological satellites use bands within atmospheric windows, in which limited atmospheric absorption occurs. The bands commonly used by geostationary meteorological satellites are a VIS band at 0.55 to 0.90 μm and IR bands at 3.5 to 4.0 μm, 10.5 to 11.5 μm, and 11.5 to 12.5 μm [6]. Generally, the VIS band observes sunlight reflected from the Earth’s surface. The 10.5–11.5 μm and 11.5–12.5 μm bands primarily observe the thermal radiation emitted from the Earth’s surface and atmosphere. The 6.5–7.0 μm band observes the amount of water vapor (WV) in the upper and middle atmospheric layers. The 3.5–4.0 μm band, termed the shortwave IR (SWIR) band, mainly observes reflected sunlight during the daytime and emitted IR radiation during the night; thus, the SWIR band must be used differently by day and by night [7].
In this study, we present nighttime reflectance in the VIS band generated using artificial intelligence (AI); such reflectance cannot be observed by satellites and, to our knowledge, has not been generated before. We applied the CGAN technique because generating nighttime reflectance in the VIS band can be posed as a generative image-to-image task well suited to the CGAN method. We used the Communication, Ocean, and Meteorological Satellite (COMS) of the Korea Meteorological Administration (KMA) as the satellite data source. Our results can be useful for a variety of meteorological applications, such as analyses of fog, clouds, and typhoons, for operational and research purposes.

2. Data

The Communication, Ocean, and Meteorological Satellite (COMS) was successfully launched in 2010, is stationed at 128.2°E, and has been operated by the KMA with spatial coverage of the Western Pacific region. Its Meteorological Imager (MI) has one channel in the VIS spectrum (0.55–0.80 μm) and four IR channels (SWIR: 3.5–4.0 μm, WV: 6.5–7.0 μm, IR1: 10.3–11.3 μm, and IR2: 11.5–12.5 μm). The spatial resolutions of COMS/MI in the VIS and IR bands are 1 km and 4 km at the nadir, respectively [8]. The temporal resolutions of COMS/MI are every 3 h for the full disk and every 30 min for the Far-East Asia area, which includes the Korean Peninsula.
We used the Far-East Asia level 1B (L1B) image data of COMS/MI, 1024 × 1024 pixels in size, during the winter (December to February) and summer (June to August) seasons from January 1, 2012 to December 31, 2017 to establish the training, validation, and test data for the AI-generated COMS images [9]. These data were obtained from the National Meteorological Satellite Center (NMSC) of the KMA. Table 1 summarizes the characteristics of the MI sensor on COMS.
Figure 1 shows typical weather patterns in the Far-East Asia region during summer (a typhoon) and winter (snowfall). Figure 1a,b show examples of COMS-observed VIS reflectance and IR radiance images on August 1, 2018, 04:00 UTC (13:00 Korean Standard Time (KST)) during summer, when Typhoon Jongdari was approaching the Korean Peninsula and Japan. Figure 1c,d show the COMS VIS reflectance and IR radiance on January 1, 2018, 04:00 UTC (13:00 KST) during winter, when snow was falling in western Japan.

3. Methods

3.1. CGAN

In this study, our proposed model used the well-established CGAN [10] architecture known as Pix2Pix [11]. Thus far, GANs have been successfully applied to a variety of computer vision and image processing tasks [12]. The CGAN extends the GAN [13] and the deep convolutional GAN (DCGAN) [14]. The CGAN describes a min-max game between a generative model and a discriminative model [14]. Generally, a GAN consists of a generative model and a discriminative model [13]. The generative model learns to generate a virtual output image from the input image. The discriminative model learns to distinguish the virtual output image from the real image. The CGAN is trained with the following min-max objective function (G*):
G^* = \arg \min_G \max_D \mathcal{L}_{CGAN}(G, D) + \lambda \mathcal{L}_{L1}(G)
where \mathcal{L}_{CGAN} is the CGAN loss function (GAN loss or adversarial loss); \mathcal{L}_{L1} is the distance term (reconstruction loss or CNN loss); and G and D are the generator and discriminator, respectively. λ is the parameter that controls the trade-off between the CGAN loss \mathcal{L}_{CGAN}(G, D) and the L1 loss \mathcal{L}_{L1}(G).
In general, the GAN system consists of a generative model (G) and a discriminative model (D) that provide adversarial feedback to each other to optimize the min-max objective (G*) during training and validation [13]. G is trained to capture the data distribution of a set of input images and generate virtual images, while D is trained to discriminate whether its input images are real input images or G’s virtual images. The CGAN loss and the L1 loss originate from the GAN method and the CNN method, respectively.
The first term, the CGAN loss, is described as follows:
\mathcal{L}_{CGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]
where \mathcal{L}_{CGAN}(G, D) is the adversarial loss involving D and G. The first term, \mathbb{E}_{x,y}[\log D(x, y)], drives the discriminator to maximize the probability assigned to the training data, and the second term, \mathbb{E}_{x}[\log(1 - D(x, G(x)))], drives the discriminator to minimize the probability assigned to data sampled from G. Here, x, y, and G(x) are the real input image, the real output image, and the virtual output image, respectively. The log function is adopted to relax the gradient insufficiency at the beginning of the training [12].
The second term, the CNN (L1) loss, is expressed as follows:
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}[\lVert y - G(x) \rVert_1]
where \mathcal{L}_{L1}(G) is the reconstruction loss that minimizes the difference between the real output image y and the virtual image G(x).
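To make the interplay of the two loss terms concrete, the following is a minimal TensorFlow sketch of how a Pix2Pix-style discriminator and generator objective can be assembled. The function names, the use of tf.keras losses, and the λ value of 100 (a common Pix2Pix default) are illustrative assumptions, not details taken from this study.

```python
import tensorflow as tf

# Illustrative sketch only: LAMBDA = 100 is a common Pix2Pix default,
# not necessarily the value used in this study.
LAMBDA = 100.0
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(d_real_logits, d_fake_logits):
    # D should output "real" (1) for observed (IR1, VIS) pairs and
    # "fake" (0) for (IR1, G(IR1)) pairs.
    real_loss = bce(tf.ones_like(d_real_logits), d_real_logits)
    fake_loss = bce(tf.zeros_like(d_fake_logits), d_fake_logits)
    return real_loss + fake_loss

def generator_loss(d_fake_logits, generated_vis, target_vis):
    # Adversarial term: G tries to make D classify its output as real.
    adv_loss = bce(tf.ones_like(d_fake_logits), d_fake_logits)
    # L1 (reconstruction) term: pixel-wise distance to the real VIS image.
    l1_loss = tf.reduce_mean(tf.abs(target_vis - generated_vis))
    return adv_loss + LAMBDA * l1_loss
```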
In this study, we used a general-purpose CGAN framework termed Pix2Pix [11] for satellite image-to-image translation. Pix2Pix has the advantage of not requiring noise as an input to G. In addition, the CGAN loss \mathcal{L}_{CGAN}(G, D) in Pix2Pix is learned from the data, and Pix2Pix also uses the L1 loss \mathcal{L}_{L1}(G) between the output of G and the ground truth [15]. We used COMS/MI observations, with the COMS IR1 and VIS images corresponding to x and y, respectively.

3.2. Band Selection and Implementation

In this study, we implemented Pix2Pix [11] to process pairs of 8-bit daytime reflectance and brightness temperature image data sets in the VIS (0.675 µm) and IR1 (10.8 µm) channels of COMS/MI to obtain a model. The IR1, IR2, and WV bands do not depend on day or night. The SWIR band was excluded because of its dependence on sunlight. From the correlation comparison between the VIS band and the IR1, IR2, and WV bands, we chose IR1 as the pair for the VIS band because it showed the highest correlation.
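As an illustration of this band-selection step, the Pearson correlation between a daytime VIS image and each candidate IR image could be computed as in the sketch below; the random arrays stand in for actual COMS/MI L1B scenes and are placeholders only.

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson correlation between two images of equal shape."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return np.corrcoef(a, b)[0, 1]

# Placeholder 8-bit image arrays (1024 x 1024) for one daytime scene.
# In practice these would be read from the COMS/MI L1B files.
vis = np.random.randint(0, 256, (1024, 1024))
candidates = {"IR1": np.random.randint(0, 256, (1024, 1024)),
              "IR2": np.random.randint(0, 256, (1024, 1024)),
              "WV":  np.random.randint(0, 256, (1024, 1024)),
              "SWIR": np.random.randint(0, 256, (1024, 1024))}

scores = {name: pearson_corr(vis, img) for name, img in candidates.items()}
best_band = max(scores, key=scores.get)  # IR1 for the real COMS data in this study
print(scores, best_band)
```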
For training, the input patches were cropped to a size of 1024 × 1024 pixels with a batch size of 2192. The data sets were COMS/MI VIS and IR1 images from January 1, 2012, to December 31, 2017, for the winter season and from June 1, 2012, to August 31, 2017, for the summer season. All data were taken at 04:00 UTC (daytime, 13:00 KST) to maximize the sunlight effect. Thus, G uses a batch of 2192 VIS-channel daytime reflectance images of 1024 × 1024 pixels, while D uses a batch of 2192 IR1-channel daytime radiance images of 1024 × 1024 pixels. The batch of 2192 corresponds to 85.86% of the entire data set of 2553 images.
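The following sketch shows one plausible way to pair and normalize the 8-bit IR1/VIS training images described above and to check the quoted split fractions; the normalization to [−1, 1] is a common GAN convention and, like the helper names, is an assumption rather than a documented detail of this study.

```python
import numpy as np

def normalize_8bit(img):
    # Map 8-bit counts (0-255) to [-1, 1], a common convention for GAN inputs.
    return img.astype(np.float32) / 127.5 - 1.0

def make_pair(ir1_img, vis_img):
    # x = IR1 radiance image (input to G), y = VIS reflectance image (target).
    return normalize_8bit(ir1_img), normalize_8bit(vis_img)

# Sanity check of the train/validation split quoted in the text.
n_total, n_train, n_valid = 2553, 2192, 361
print(round(100 * n_train / n_total, 2))  # 85.86
print(round(100 * n_valid / n_total, 2))  # 14.14
```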
For validation, the data sets were COMS/MI VIS and IR1 images at 04:00 UTC (13:00 KST) from December 1, 2017, to February 28, 2018, for the winter season and from June 1, 2018, to August 31, 2018, for the summer season. Pix2Pix uses a batch of 361 pairs of daytime VIS reflectance and IR1 radiance images of 1024 × 1024 pixels. The batch of 361 corresponds to 14.14% of the entire data set of 2553 images.
For application of our model, we established data sets of COMS/MI VIS and IR1 images at 16:00 UTC (nighttime, 01:00 KST) from January 1 to December 31, 2018. During training, the G of our model learned to minimize the mean error between the observed VIS reflectance (VIS) and the AI-generated VIS reflectance (AI-VIS) and to reproduce the true data distribution of VIS reflectance from the corresponding IR1 radiance images (IR). The D of our model learned to distinguish the real pair (IR, VIS) from the AI-generated pair (IR, AI-VIS).
Our experiment was implemented in TensorFlow with Python 3.5.4 under Linux Ubuntu 16.04.5 with CUDA 9.0 and cuDNN 7.4.1.5, using four NVIDIA Titan-XP D5 GPUs and an Intel Xeon CPU, and took approximately 12 h for 389 epochs over 500,000 iterations. Figure 2 shows the outline of our model procedure. IR1 and VIS are the daytime images observed in the COMS IR1 and VIS bands, respectively. AI-VIS indicates the CGAN-generated virtual daytime image for the COMS VIS band produced from real daytime IR1 images. IR1′ indicates IR1 images observed during the night. AI-VIS′, which does not exist in real COMS observations, indicates the AI-generated nighttime reflectance image produced by our model.
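Once the generator is trained, producing a nighttime AI-VIS′ image from an IR1′ image is a single forward pass. The sketch below assumes a hypothetical saved Keras generator and the same [−1, 1] normalization as above; the model path and helper names are illustrative, not part of the published workflow.

```python
import numpy as np
import tensorflow as tf

# Hypothetical saved generator; the actual checkpoint format is not documented here.
generator = tf.keras.models.load_model("cgan_generator_80000iters")

def generate_nighttime_vis(ir1_night):
    """IR1' (nighttime radiance, 1024 x 1024, 8 bit) -> AI-VIS' reflectance image."""
    x = ir1_night.astype(np.float32) / 127.5 - 1.0                  # normalize to [-1, 1]
    x = x[np.newaxis, ..., np.newaxis]                              # add batch/channel dims
    ai_vis = generator.predict(x)[0, ..., 0]                        # forward pass through G
    return ((ai_vis + 1.0) * 127.5).clip(0, 255).astype(np.uint8)   # back to 8-bit counts
```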

4. Results

Figure 3 shows representative images of the VIS reflectance, IR1 radiance, IR2 radiance, and WV radiance observed by the COMS/MI sensor on June 1, 2017, 04:00 UTC (13:00 KST, daytime). Table 2 summarizes the correlation coefficients between the VIS image and the other band images for one day per month in 2017. The IR1 band shows the highest correlation with the VIS band across all seasons. The IR2 band shows a comparable but slightly smaller correlation than the IR1 band. The IR1 and IR2 bands have relatively low correlations with the VIS band during winter and relatively high correlations during the other seasons. The WV band shows lower correlations with the VIS band than the IR1 and IR2 bands, and even negative correlations, particularly during the winter season. The SWIR band shows the lowest correlations with the VIS band among the IR1, IR2, WV, and SWIR bands and, like the WV band, negative correlations during the winter season. Thus, we chose the IR1 channel as the pair for the VIS channel for the CGAN training model in this study.
Figure 4 shows the loss values and validation statistics used to select the best iteration of our model. In terms of the CGAN loss, the best iteration for the AI-generated reflectance in the COMS VIS band was approximately 110,000. Figure 4b shows the variations in CC and RMSE with the iteration number. In this study, approximately 80,000 iterations of our model yield a maximum CC value of 0.895 and a minimum RMSE value of 33.58. Thus, we adopted 80,000 iterations to obtain the best performance of our CGAN model for generating the AI-generated reflectance in the COMS VIS band.
Figure 5 shows the results of our model. Figure 5a,b show a COMS-observed real daytime reflectance image in the COMS VIS (0.675 μm) band and a daytime radiance image in the COMS IR1 (10.8 μm) band on August 22, 2018, 04:00 UTC (13:00 KST), respectively. Figure 5c shows the AI-generated daytime reflectance image at the same time as Figure 5a, and Figure 5d shows the difference between the COMS-observed real daytime reflectance and the AI-generated daytime reflectance. The real COMS reflectance image and the AI-generated reflectance image agree well, except near the typhoon area, where high clouds appear white. In this case, our model was implemented using 80,000 iterations.
Figure 6 shows the statistical results of our model. Figure 6a,b show scatterplots between the COMS VIS reflectance and the AI-generated VIS reflectance during summer (June 2018 to August 2018) and winter (December 2017 to February 2018), respectively. The bias, RMSE, and CC are −2.41, 36.85, and 0.88 during summer and −0.21, 33.02, and 0.89 during winter, respectively. In general, the bias magnitude and RMSE are higher, and the CC slightly lower, during summer than during winter. This seasonal difference may arise from the vertical distribution of clouds, because more high clouds form during summer than during winter. The noisy vertical line at x = 225 originates from the dotted latitude/longitude lines and solid borderlines drawn on the original COMS VIS images provided by the Korea Meteorological Administration (KMA)/National Meteorological Satellite Center (NMSC), which could not be removed in this study.
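The bias, RMSE, and CC reported here are standard pixel-wise statistics; a minimal sketch of how they could be computed from a matched pair of AI-generated and observed VIS images follows (the array names are illustrative).

```python
import numpy as np

def evaluate(ai_vis, obs_vis):
    """Pixel-wise bias, RMSE, and correlation between AI-generated and observed VIS."""
    a = ai_vis.astype(np.float64).ravel()
    o = obs_vis.astype(np.float64).ravel()
    bias = np.mean(a - o)                   # mean difference (AI minus observation)
    rmse = np.sqrt(np.mean((a - o) ** 2))   # root mean square error
    cc = np.corrcoef(a, o)[0, 1]            # Pearson correlation coefficient
    return bias, rmse, cc
```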
Figure 7a,b show the daily variations in the CC and RMSE values between the real COMS VIS reflectance and the AI-generated reflectance (daytime) during winter (January 2018) and summer (August 2018), respectively. The daily CC ranges from approximately 0.85 to 0.93 during both seasons. The RMSE values range between 28 and 38 during January 2018 and between 30 and 40 during August 2018. This behavior can also be interpreted as a consequence of the daily variation in the vertical distribution of clouds.
Figure 8a,b show the (nonexistent) real VIS observation and the AI-generated VIS reflectance image during nighttime (October 22, 2018, 17:00 UTC; October 23, 02:00 KST), respectively. At this time, there is no real COMS VIS reflectance because there is no sunlight. Nevertheless, our AI-generated nighttime VIS reflectance image shows cloud characteristics similar to those seen in real daytime VIS reflectance.

5. Summary and Conclusions

The VIS and IR bands on many geostationary satellites have been crucial for weather analysis, nowcasting, and forecasting at high spatial and temporal resolutions during the past few decades. VIS band images, in particular, are useful for analyzing clouds and weather because they are intuitive to interpret, much like what human eyes see. However, VIS band observation is only available during the day because it primarily measures sunlight reflected off the Earth, whereas IR bands observe the energy emitted from the Earth independently of sunlight.
In this study, we proposed a unique method to generate non-existent nighttime VIS satellite images using the CGAN technique, one of the deep learning techniques. We translated COMS IR images into VIS images using a deep learning model based on CGAN, one of the best-performing methods for image translation. For model development and training, we used daytime image data sets of reflectance in the COMS/MI VIS (0.675 μm) band and radiance in the IR1 (10.8 μm) band over 5 years (2012 to 2017) for the summer and winter seasons separately. For validation, we used the corresponding daytime data sets for the winter (December 2017 to February 2018) and summer (June to August 2018) seasons separately. For training, the input patches were cropped to a size of 1024 × 1024 pixels with a batch size of 2192. For validation, the data sets were COMS/MI VIS and IR1 images at 04:00 UTC (13:00 KST), 1024 × 1024 pixels in size, with a batch of 361 pairs of daytime VIS reflectance and IR1 radiance images. From the correlation analysis among the COMS VIS, SWIR, WV, IR1, and IR2 bands, we found that the VIS and IR1 bands are the best pair for training and validation. We used Pix2Pix to process the pairs of 8-bit daytime reflectance and brightness temperature image data sets in the VIS and IR1 bands of COMS/MI using TensorFlow with Python under Linux on four NVIDIA Titan-XP D5 GPUs and an Intel Xeon CPU. The best iteration of our model was determined to be approximately 80,000 based on the series of CC values as a function of iteration number. Finally, we presented the AI-generated nighttime reflectance in the VIS band, which cannot be observed from satellites. The AI-generated VIS images show relatively lower CC and higher RMSE values during summer than during winter because of the wider vertical distribution of clouds.
Our model successfully produces AI-generated VIS images from IR images, with high correlation and low error between the AI-generated and observed VIS images. We can now monitor weather at night using IR images as well as AI-generated VIS images. This result can be useful for a variety of meteorological applications, such as fog, cloud, and typhoon analyses, for operational and research purposes.

Author Contributions

Conceptualization, S.H. and Y.-J.M.; methodology, S.H. and K.K.; software, T.K. and Y.-J.M.; validation, K.K., Y.K., and J.-H.K.; formal analysis, K.K., J.-H.K., Y.K., and S.H.; investigation, K.K., Y.K., and S.H.; resources, S.H. and Y.-J.M.; data curation, K.K., E.P., and G.S.; writing—original draft preparation, K.K. and S.H.; writing—review and editing, S.H.; visualization, K.K. and J.-H.K.; supervision, S.H. and Y.-J.M.; project administration, S.H. and Y.-J.M.; funding acquisition, S.H. and Y.-J.M.

Funding

This research was funded by the Korea Meteorological Administration (KMA) Research and Development Program, grant number KMI2018-05710; the Korea Ministry of Environment (KME) Korea Environment Industry & Technology Institute (KEITI) Advanced Water Management Research Program, grant number 83079; the National Research Foundation (NRF) Basic Science Research Program, grant numbers NRF-2016R1A2B4013131 and NRF-2019R1A2C1002634; the Korea Astronomy and Space Science Institute (KASI) Research and Development Program ‘Study on the Determination of Coronal Physical Quantities using Solar Multi-wavelength Images’, project number 2019-1-850-02; and the Ministry of Science, ICT and Future Planning (MSIP) Institute for Information & communications Technology Promotion (IITP) grant, number 2018-0-01422.

Acknowledgments

The authors thank anonymous reviewers for helpful and constructive comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Purdom, J.; Menzel, P. Evolution of satellite observation in the United States and their use in meteorology. In Historical Essays on Meteorology 1919–1995; Fleming, J.R., Ed.; American Meteorological Society: Boston, MA, USA, 1996; pp. 99–156.
2. Schmetz, J.; Pili, P.; Tjemkes, S.; Just, D.; Kerkmann, J.; Rota, S.; Ratier, R. Supplement to an introduction to Meteosat Second Generation (MSG). Bull. Am. Meteorol. Soc. 2002, 83, 991.
3. Coordination Group for Meteorological Satellites (CGMS). CGMS Global Contingency Plan, WMO Space Programme, 2007. Available online: http://www.wmo.int/pages/prog/sat/documents/CGMS_Contingency-Plan-2007.pdf (accessed on 20 June 2019).
4. Bessho, K.; Date, K.; Hayashi, M.; Ikeda, A.; Imai, T.; Inoue, H.; Kumagai, Y.; Miyakawa, T.; Murata, H.; Ohno, T.; et al. An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteor. Soc. Jpn. 2016, 94, 151–183.
5. Schmit, T.J.; Gunshor, M.M.; Menzel, W.P.; Gurka, J.J.; Li, J.; Bachmeier, A.S. Introducing the next-generation advanced baseline imager on GOES-R. Bull. Am. Meteorol. Soc. 2005, 86, 1079–1096.
6. Cooperative Institute for Research in the Atmosphere (CIRA). Introduction to GOES-8. Available online: http://rammb.cira.colostate.edu/training/tutorials/goes_8_original/default.asp (accessed on 20 May 2019).
7. Lee, J.-R.; Chung, C.-Y.; Ou, M.-L. Fog detection using geostationary satellite data: Temporally continuous algorithm. Asia-Pac. J. Atmos. Sci. 2011, 47, 113–122.
8. Woo, H.-J.; Park, K.-A.; Li, X.; Lee, E.-Y. Sea surface temperature retrieval from the first Korean geostationary satellite COMS data: Validation and error assessment. Remote Sens. 2018, 10, 1916.
9. National Meteorological Satellite Center (NMSC). Available online: http://nmsc.kma.go.kr (accessed on 8 October 2018).
10. Isola, P.; Zhu, J.Y.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
11. Lin, Y.-C. pix2pix-tensorflow, 2017. Available online: https://github.com/yenchenlin/pix2pix-tensorflow (accessed on 20 March 2019).
12. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. Available online: https://arxiv.org/abs/1511.06434 (accessed on 17 July 2019).
13. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. arXiv 2014, arXiv:1406.2661, 2672–2680. Available online: https://arxiv.org/abs/1406.2661 (accessed on 17 July 2019).
14. Nguyen, V.; Vicente, T.F.Y.; Zhao, M.; Hoai, M.; Samaras, D. Shadow detection with conditional generative adversarial networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
15. Michelsanti, D.; Tan, Z.H. Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification. In Proceedings of INTERSPEECH 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, 20–24 August 2017. Available online: https://arxiv.org/abs/1709.01703 (accessed on 17 July 2019).
Figure 1. Examples of Communication, Ocean, and Meteorological Satellite (COMS) Meteorological Imager (MI) observations. (a) Reflectance in the visible (VIS) band and (b) radiance in the infrared 1 (IR1) band on August 1, 2018 (summer). (c) Reflectance in the VIS band and (d) radiance in the IR1 band on January 1, 2018 (winter). Time is 04:00 UTC (13:00 KST, daytime) for all the images.
Figure 2. Model structure of our research.
Figure 3. Images of (a) VIS reflectance, (b) water vapor (WV) radiance, (c) IR1 radiance, and (d) IR2 radiance observed from COMS/MI on June 1, 2017, 04:00 UTC (daytime).
Figure 4. (a) Values of loss G, loss D, and loss L1 as a function of iterations for artificial intelligence (AI)-generated reflectance. (b) Variations in correlation coefficient (CC) and root mean square error (RMSE) between the real VIS reflectance and AI-generated VIS reflectance.
Figure 5. (a) Real VIS reflectance, (b) real IR1 radiance, (c) AI-generated VIS reflectance, and (d) difference between real VIS reflectance and AI-generated VIS reflectance. In this case, our model was trained with 80,000 iterations. The observation date and time is August 22, 2018, 04:00 UTC (daytime).
Figure 6. Scatterplots and statistical results of AI-generated reflectance during (a) summer (June 2018 to August 2018) and (b) winter (December 2017 to February 2018).
Figure 7. Time series of CC and RMSE between real VIS reflectance and AI-generated reflectance (daytime) during (a) winter (January 2018) and (b) summer (August 2018).
Figure 8. (a) No VIS observation and (b) AI-generated nighttime VIS reflectance image with 80,000 iterations on October 22, 2018 17:00 UTC (October 23, 02:00 KST, nighttime). At this time, there is no real VIS reflectance.
Table 1. Characteristics of bands of the Communication, Ocean, and Meteorological Satellite (COMS) Meteorological Imager (MI) sensor.
Band | Wavelength (μm) | Bandwidth (μm) | Spatial Resolution (km) | Applications
VIS | 0.675 | 0.55–0.80 | 1 | Cloud images, Asian dust, forest fires, fog observation, atmospheric motion vector
SWIR | 3.75 | 3.5–4.0 | 4 | Night fog and low-level clouds, forest fire detection, land surface temperature
WV | 6.75 | 6.5–7.0 | 4 | Observation of mid and upper atmospheric humidity and upper atmospheric motions
IR1 | 10.8 | 10.3–11.3 | 4 | Cloud information, sea surface temperature, Asian dust observation
IR2 | 12.0 | 11.5–12.5 | 4 | Cloud information, sea surface temperature, Asian dust observation
Table 2. Correlation coefficients between the COMS visible (VIS) band and the infrared 1 (IR1), IR2, water vapor (WV), and shortwave infrared (SWIR) bands.
Cases | IR1 | IR2 | WV | SWIR
2017.01.15. 04:00 UTC | 0.3167 | 0.3122 | −0.519 | −0.2084
2017.02.15. 04:00 UTC | 0.6242 | 0.6198 | 0.2908 | −0.1927
2017.03.15. 04:00 UTC | 0.5796 | 0.5658 | 0.1759 | −0.2424
2017.04.15. 04:00 UTC | 0.6862 | 0.6864 | 0.6154 | −0.0384
2017.05.15. 04:00 UTC | 0.7318 | 0.7202 | 0.4907 | 0.2068
2017.06.15. 04:00 UTC | 0.7323 | 0.7291 | 0.6634 | 0.3607
2017.07.15. 04:00 UTC | 0.7029 | 0.6916 | 0.5046 | 0.1951
2017.08.15. 04:00 UTC | 0.7184 | 0.7025 | 0.4553 | 0.0786
2017.09.15. 04:00 UTC | 0.7152 | 0.6986 | 0.5091 | 0.1875
2017.10.15. 04:00 UTC | 0.8037 | 0.7836 | 0.4647 | 0.3349
2017.11.15. 04:00 UTC | 0.6502 | 0.6411 | 0.3021 | −0.0288
2017.12.15. 04:00 UTC | 0.3948 | 0.3824 | −0.0794 | −0.1042
