Article

Estimation of Vegetation Leaf-Area-Index Dynamics from Multiple Satellite Products through Deep-Learning Method

by Tian Liu, Huaan Jin, Ainong Li, Hongliang Fang, Dandan Wei, Xinyao Xie and Xi Nan
1 Center of Digital Mountain and Remote Sensing Application, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Chengdu 610041, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Wanglang Mountain Remote Sensing Observation and Research Station of Sichuan Province, Mianyang 621000, China
4 Key Laboratory of Resources and Environmental Information System (LREIS), Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
5 Land Satellite Remote Sensing Application Center, Ministry of Natural Resources (MNR), Beijing 100048, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 4733; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194733
Submission received: 12 August 2022 / Revised: 8 September 2022 / Accepted: 11 September 2022 / Published: 22 September 2022
(This article belongs to the Special Issue Remote Sensing for Surface Biophysical Parameter Retrieval)

Abstract
A high-quality leaf-area index (LAI) is important for land surface process modeling and vegetation growth monitoring. Although multiple satellite LAI products have been generated, they usually show spatio-temporal discontinuities and are sometimes inconsistent with vegetation growth patterns. In this paper, a deep-learning model was proposed to retrieve the time-series LAI from multiple satellite datasets. The fusion of three global LAI products (i.e., the VIIRS, GLASS, and MODIS LAIs) was first carried out through a double logistic function (DLF). Then, the DLF LAI, together with MODIS reflectance (MOD09A1) data, served as the training samples of the deep-learning long short-term memory (LSTM) model for sequential LAI estimation. In addition, LSTM models trained by a single LAI product were considered as indirect references for the further evaluation of our proposed approach. The validation results showed that our proposed LSTMfusion LAI provided the best performance (R2 = 0.83, RMSE = 0.82) when compared to LSTMGLASS (R2 = 0.79, RMSE = 0.93), LSTMMODIS (R2 = 0.78, RMSE = 1.25), LSTMVIIRS (R2 = 0.70, RMSE = 0.94), GLASS (R2 = 0.68, RMSE = 1.05), MODIS (R2 = 0.26, RMSE = 1.75), VIIRS (R2 = 0.44, RMSE = 1.37) and DLF LAI (R2 = 0.67, RMSE = 0.98). A temporal comparison between the LSTMfusion LAI and the three LAI products demonstrated that the LSTMfusion model efficiently generated a time-series LAI that was smoother and more continuous than the VIIRS and MODIS LAIs. At the crop peak growth stage, the LSTMfusion LAI values were closer to the reference maps than the GLASS LAI. Furthermore, our proposed method proved effective and robust in maintaining the spatio-temporal continuity of the LAI when noisy reflectance data were used as the LSTM input. These findings highlight that the DLF method helped to enhance the quality of the original satellite products, and that an LSTM model trained with the coupled satellite products can provide reliable and robust estimations of the time-series LAI.

1. Introduction

Leaf-area index (LAI) is usually defined as the total one-sided green leaf area per unit of horizontal ground surface area [1]. Compared with widely used vegetation indices (e.g., the Normalized Difference Vegetation Index, NDVI), the biophysical parameter LAI is more closely related to physiological processes of vegetation, such as photosynthesis and transpiration. Estimates of the time-series LAI are essential for earth-greening studies [2], land surface model simulations [3], and vegetation-dynamics monitoring [4]. Satellite remote sensing provides an effective way to estimate the global or regional LAI in time series.
Global LAI products, such as the VIIRS [5], GEOV2 [6], GLASS [7,8], and MODIS [9] LAIs, have been produced from satellite data through multiple retrieval approaches [10]. These LAI products provide important information for vegetation-dynamics research, especially on the global scale [2]. It is worth noting that some products (e.g., the MODIS and VIIRS LAIs) suffer from spatial gaps and temporal discontinuities because of cloud contamination, sensor issues, retrieval algorithms, and other factors. In contrast, the GLASS and GEOV2 LAIs display spatio-temporally continuous profiles of vegetation dynamics. They are actually "fused" products trained on the CYCLOPES [11] and MODIS LAIs, using an artificial neural network (ANN) for GEOV2 and a general regression neural network (GRNN) for GLASS. Different products were generated using various models and algorithms that usually make different physical assumptions. Therefore, these product-specific approaches easily cause evident inconsistencies among multiple LAI products [12]. It is essential to explore a new retrieval method for time-series LAI estimation with higher accuracy than the current LAI products.
Although several attempts have been made to fill spatial gaps and smooth LAI time series through an integrated filter algorithm [13,14] or a multi-sensor fusion method [15] for sequential LAI estimation, the performance of the improved LAI remained unsatisfactory due to the limited quality of the original products. It is worth noting that these attempts did not incorporate the temporal evolution of vegetation dynamics into further LAI estimations. In fact, the incorporation of this prior information is conducive to improving the spatio-temporal continuity and accuracy of LAI retrieval [16,17]. For example, data-assimilation methods are effective for improving time-series LAI estimations through the combination of vegetation-dynamics models and instantaneous satellite images [18,19]. Unfortunately, data assimilation is generally complex and time-consuming [20], and depends on vegetation-dynamics models for the temporal evolution of the LAI [21]. Moreover, error propagation and the uncertainties of the model and observations severely hamper the performance of data assimilation [22], which has adverse effects on the fast mapping of the time-series LAI.
As one of the most promising and cutting-edge technologies in the machine-learning field, deep learning, which has been successfully applied to computer vision, image recognition, and information extraction, has attracted more and more attention in data-driven earth system science [23]. A deep-learning algorithm, long short-term memory (LSTM, [24]), can capture long-range nonlinear dependencies between the current and previous states of sequential data, which makes it well suited to time-series modeling and temporal feature extraction. Given the inherent changes of natural vegetation dynamics, LSTM is expected to be an attractive alternative to improve the accuracy and spatio-temporal integrity of LAI estimates from multitemporal satellite images. However, few studies have used the LSTM algorithm to retrieve the LAI from remotely sensed observations [25,26,27]. Zhang et al. [25] utilized the LSTM model to retrieve the time-series LAI, providing a methodological reference for LAI-retrieval studies. Long et al. [26] effectively predicted the LAI of winter wheat; however, the predicted result was validated against simulated data rather than field-measured data. The inputs are usually land surface reflectance data and their derived vegetation indices [28], and the output response is the LAI from field measurements or satellite products. Since the transferability and extrapolation of the trained LSTM model are limited when ground-based LAI measurements are taken as the output [25], satellite LAI products can be selected as the output responses for LSTM training in order to conveniently estimate sequential LAI dynamics at a regional scale. High-quality training data are critical for LSTM performance. Given the existing issues of the current LAI products mentioned above, it is expected that the fusion of multiple satellite products may improve LSTM performance for the estimation of LAI dynamics from multitemporal satellite images. Additionally, the noise in reflectance data caused by clouds or aerosols may influence time-series retrieval. Therefore, it is necessary to quantitatively analyze the sensitivity of the LSTM retrieval model to noisy reflectance inputs.
Recently, Ma et al. [27] generated time-series LAI products by combining deep learning and multiple LAI products. The authors trained the LSTM model and other deep-learning models using samples from three LAI products, but they neither compared these models with LSTM models trained on a single LAI product nor quantitatively analyzed their resistance to persistent cloud or aerosol contamination. Since the quality of the training data has a great influence on the retrieval model, it is essential to compare and analyze the results of LSTM models trained with different inputs and output responses. In addition, Ma et al. [27] compared the theoretical performances of four deep-learning methods, but did not validate the retrieval results of the LSTM models against a "true" reference LAI. Our proposed method is applied at a regional scale and aims to: (1) validate the accuracy of time-series LAI estimations through the integration of LSTM and multiple satellite products over a regional scale, rather than the global scale, which has potential for local applications (e.g., precision agriculture and crop yield estimation); (2) compare the performance of the LSTM model when different LAI products serve as the training response; and (3) quantitatively analyze the sensitivity of our proposed approach to noisy reflectance inputs.

2. Study Area and Data

2.1. Study Area

The study area is a crop-dominated region located in Hailun (47°24′–47°26′N, 126°47′–126°51′E), Heilongjiang Province, northeastern China (Figure 1). The altitude is approximately 200–240 m above sea level. The local climate is a cold temperate continental monsoon climate, with yearly precipitation ranging from 500 mm to 600 mm and a mean annual temperature of 2 °C. Sorghum, maize, and soybean are the main crop types. They show similar growing patterns, with planting in May and harvesting in September.

2.2. Data

2.2.1. Fine LAI Reference Maps

Fang et al. [29] provided eleven 30 m-resolution LAI maps ([30], Figure 1c), each covering about 30 km². Fine-resolution satellite images, including HJ-1 [31], Landsat 7 ETM+ [32], and Sentinel-2A [33], were preprocessed to obtain fine-resolution reflectance data. Then, a look-up table (LUT) was established with the help of the ACRM [34] radiative transfer model. Finally, fine LAI reference maps in the UTM projection were produced using the LUT method on 27 June, 5 July, 14 July, 26 July, 31 July, 2 August, 13 August, 20 August, 29 August, 18 September, and 22 September 2016.
The LAI-2200 canopy analyzer was used to acquire field LAI values weekly at each plot from 20 June to 22 September 2016 (during the crop growth season) over two maize plots, two soybean plots, and one sorghum plot (Table 1). Each plot had a size of 100 m × 500 m and included three 20 m × 20 m sampling units [29]. These field LAI measurements were used to validate the accuracy of the fine-resolution LAI maps. The validation indicated that the fine-resolution LAI maps achieved good performance against the in situ measurements (the coefficient of determination (R2) was 0.86, the root mean square error (RMSE) was 0.70, and the bias was 0.15). The fine-resolution LAI maps, referred to as the "reference LAI", were directly aggregated to the MODIS grid as the "true" reference for the validation of our proposed method.

2.2.2. Surface Reflectance Data

The MODIS bidirectional reflectance (MOD09A1) product, 8-day composite reflectance data with 500 m spatial resolution, was obtained from Earth Science Data Systems (ESDS) [35]. It contains seven reflectance bands, solar and viewing zenith angles, and relative azimuth angle between satellite and sun. Moreover, it also provides the quality control (QC) layer. The high-quality reflectance data in the shortwave–infrared, near-infrared, and red bands and the corresponding sun-viewing geometry information were used as the input variables for the LSTM training.

2.2.3. LAI Products

Three LAI products, including the GLASS, MODIS and VIIRS LAIs, were selected in order to explore the ability of our proposed method.
The Global Land Surface Satellite (GLASS version 5) LAI was retrieved using general regression neural networks from the MOD09A1 reflectance data at the resolutions of 500 m and 8 days in the sinusoidal projection from 2000 to the present [8]. This product was downloaded from [36]. Given that the latest version 6 [27] was not available when this study started, our work was limited to the version 5 GLASS LAI.
The MODIS LAI version 6 product (MCD15A2H) was generated using the look-up-table (LUT) technique with the help of a 3D radiative transfer model [9,37]. The LUT method is also called the main algorithm in the MODIS LAI product. When the main algorithm fails, the empirical relationship between NDVI and LAI is triggered for LAI estimations across different biome types. The MCD15A2H product has the temporal resolution of 8 days and the spatial resolution of 500 m in the sinusoidal projection [35]. Moreover, it also provides the QC layer.
The VIIRS LAI (VNP15A2H) [5] with resolutions of 500 m and 8 days was derived from Suomi National Polar-orbiting Partnership (SNPP) observations [38]. The VIIRS algorithm is the same as the MODIS algorithm except for its specific parametrization for LAI retrieval. The VIIRS LAI is also available from [35] in the sinusoidal projection.
It can be seen from the brief descriptions mentioned above that GLASS, MODIS, and VIIRS have several identical characteristics, such as the same spatio-temporal resolutions, geographical projection, and land-cover types for model parameterization, avoiding the effect of co-registration errors among multiple products on the LAI fusion and LSTM performance. Although the MODIS and VIIRS LAIs were retrieved using a similar method, evident differences were found in some regions [29], which may be attributed to specific model parameter adjustments for the VIIRS sensor and the differences in spectral responses between the two sensors [5]. Furthermore, whether fusing the MODIS and VIIRS LAIs contributes to improving LSTM performance will be discussed in Section 4.3. Note that although GEOV2 LAI is already a smoothed and gap-filled fused product [6], the differences in the geographical projections and spatial resolutions between GEOV2 (1/112°) and MODIS (500 m) increase co-registration errors between products, and subsequently may have negative effects on the LAI fusion and LSTM performance. Therefore, the GEOV2 LAI was not used in this study.

3. Methodology

The proposed scheme for time-series LAI estimation using the LSTM deep-learning model is displayed in Figure 2. The multitemporal MOD09A1 reflectance product and the fusion of the GLASS, MODIS, and VIIRS LAIs during 2014–2015 served as the inputs and output responses, respectively, for the LSTM training. We trained the LSTM model to learn the nonlinear dependencies between the sequential LAI and the bidirectional reflectance data at the current and previous time steps, and then evaluated its theoretical performance based on the training database. Once the LSTM model was built, we used it to efficiently estimate time-series LAI values from multitemporal MOD09A1 images in 2016. Finally, the retrieved LAI maps were evaluated through qualitative and quantitative analyses. Additionally, the LSTM models trained by a single LAI product served as indirect references to further evaluate our proposed approach. To distinguish the different LSTM models, our proposed method is referred to as LSTMfusion, whereas the single-product-trained models are called LSTMGLASS, LSTMMODIS, and LSTMVIIRS, respectively.

3.1. Fusion of Multiple LAI Products

The double logistic function (DLF, Equation (1)) [39] was applied to reconstruct the time-series LAI through the fusion of multiple LAI products. Note that the MODIS and VIIRS LAI products should be filtered by the corresponding QC layer to obtain high-quality data before the fusing process.
LAI(t) = α_1 + α_2 / (1 + e^(δ_1(t − β_1))) − α_3 / (1 + e^(δ_2(t − β_2)))        (1)
where t represents the day of the year (DOY), and α_1, α_2, α_3, δ_1, δ_2, β_1, and β_2 are the seven parameters of this function. α_1 and α_2 separately control the amplitude of the temporal variation in the LAI. δ_1 and δ_2 are the normalized slope coefficients at the inflection points in spring and fall, respectively. β_1 and β_2 correspond to the dates of the inflection points (in DOY) in the green-up and senescence periods, respectively.
For each pixel, the values of the three LAI products were fused to fit a vegetation growth curve by tuning the seven parameters. The seasonal variations in the DLF LAI in one pixel from 2014 to 2016 (Figure 3) show that the DLF LAI is smoother than the MODIS and VIIRS LAIs. Meanwhile, the amplitude of the DLF LAI is higher than the counterpart of the GLASS LAI.
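A minimal sketch of this per-pixel fusion step is given below, assuming the high-quality GLASS, MODIS, and VIIRS LAI values of one pixel have already been pooled into (DOY, LAI) pairs. The use of scipy.optimize.curve_fit and the initial parameter guesses are illustrative assumptions, not the authors' exact implementation of the DLF fit in Equation (1).

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a1, a2, a3, d1, d2, b1, b2):
    """Double logistic function of Equation (1): a baseline plus two sigmoid transitions."""
    return a1 + a2 / (1.0 + np.exp(d1 * (t - b1))) - a3 / (1.0 + np.exp(d2 * (t - b2)))

def fit_dlf(doy, lai, doy_out):
    """Fit the DLF to the pooled multi-product LAI samples of one pixel and
    return the reconstructed LAI on the requested output dates."""
    base = float(lai.min())
    amp = float(lai.max() - lai.min())
    # Illustrative initial guess: baseline, green-up/senescence amplitudes, slopes,
    # and inflection dates (DOY) roughly matching a single-season crop.
    p0 = [base, amp, amp, -0.1, -0.1, 150.0, 270.0]
    params, _ = curve_fit(double_logistic, doy, lai, p0=p0, maxfev=10000)
    return double_logistic(doy_out, *params)

# Pooled GLASS/MODIS/VIIRS samples of one pixel (hypothetical values), fitted to the 8-day grid.
doy = np.array([121, 137, 153, 169, 185, 201, 217, 233, 249, 265, 281, 297], dtype=float)
lai = np.array([0.3, 0.4, 0.8, 1.6, 2.8, 3.9, 4.2, 4.0, 3.1, 1.8, 0.9, 0.4])
dlf_lai = fit_dlf(doy, lai, np.arange(121, 322, 8, dtype=float))
```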

3.2. Estimating Time-Series LAI Based on the LSTM Model

3.2.1. LSTM Principle

Compared with a conventional RNN, LSTM alleviates the problem of exploding or vanishing gradients through memory cells and a gating mechanism [40]. The architecture consists of memory cells that hold a current state based on three gates (Figure 4): the forget gate (f_t), input gate (i_t), and output gate (o_t), which control the information entering, remaining in, and leaving the current cell. x_t represents the input vector at time t. c_t and c_{t−1} are the memory states of the current and the previous blocks, respectively. Similarly, h_{t−1} and h_t represent the outputs of the previous and the current blocks, respectively. The sigmoid (σ) and hyperbolic tangent (tanh) are nonlinear activation functions used to fit complex nonlinear relationships. Element-wise multiplication (×) and element-wise addition (+) are the operations applied to the vector data in the memory cell. The calculations in the memory cell and the activation functions are as follows:
f_t = σ(W_f × [x_t, h_{t−1}] + b_f)        (2)
i_t = σ(W_i × [x_t, h_{t−1}] + b_i)        (3)
o_t = σ(W_o × [x_t, h_{t−1}] + b_o)        (4)
c_t = f_t × c_{t−1} + i_t × tanh(W_c × [x_t, h_{t−1}] + b_c)        (5)
h_t = o_t × tanh(c_t)        (6)
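To make the gate equations concrete, the sketch below implements one memory-cell step in plain NumPy following Equations (2)–(6); the feature count, hidden size, and random weights are hypothetical and serve only to illustrate the information flow, not the trained network used in this study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM memory-cell update following Equations (2)-(6)."""
    z = np.concatenate([x_t, h_prev])       # [x_t, h_{t-1}]
    f_t = sigmoid(W["f"] @ z + b["f"])      # forget gate, Equation (2)
    i_t = sigmoid(W["i"] @ z + b["i"])      # input gate, Equation (3)
    o_t = sigmoid(W["o"] @ z + b["o"])      # output gate, Equation (4)
    c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ z + b["c"])   # cell state, Equation (5)
    h_t = o_t * np.tanh(c_t)                # output, Equation (6)
    return h_t, c_t

# Hypothetical sizes: 6 input features (3 bands + 3 angles) and 16 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 6, 16
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.random((46, n_in)):          # one year of 8-day composites
    h, c = lstm_cell_step(x_t, h, c, W, b)
```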
In the present research, the LSTM network was applied as shown in Figure 2. The sequential reflectance data served as the input layer, and the LSTM layer selectively transported the feature information. Then, the features of the LSTM layer were transformed into the target outputs in the output layer through two fully connected (FC) layers. The FC layers extract features useful for LAI retrieval and pass them to the following layers. Moreover, we introduced a dropout layer with a dropout rate of 0.3 between the two FC layers to alleviate overfitting. Finally, the regression results were obtained from a regression layer. The hyperparameters of the LSTM network were: (a) a mini-batch size of 50; (b) a dropout rate of 0.3; (c) the Adam optimizer with an initial learning rate of 0.01; and (d) a maximum of 1200 epochs. The values of these hyperparameters were tuned based on previous similar work, which used LSTM models to map rice crops from Sentinel-1 time series [41].
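The network configuration above can be sketched as follows. The deep-learning framework (PyTorch here) and the layer widths are assumptions on our part; the dropout rate of 0.3, the mini-batch size of 50, and the Adam optimizer with an initial learning rate of 0.01 follow the hyperparameters listed in the text.

```python
import torch
from torch import nn

class LSTMLai(nn.Module):
    """Sequence-to-sequence LAI regression: reflectance/angle features in, LAI out."""
    def __init__(self, n_features=6, hidden_size=64, fc_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, fc_size)
        self.dropout = nn.Dropout(p=0.3)           # dropout between the two FC layers
        self.fc2 = nn.Linear(fc_size, 1)           # regression output: LAI per time step

    def forward(self, x):                          # x: (batch, time, features)
        out, _ = self.lstm(x)
        out = self.dropout(torch.relu(self.fc1(out)))
        return self.fc2(out).squeeze(-1)           # (batch, time)

model = LSTMLai()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# One illustrative training step on a mini-batch of 50 pixel time series (46 dates, 6 features).
# In practice the model would be trained for up to the stated maximum of 1200 epochs.
x = torch.rand(50, 46, 6)
y = torch.rand(50, 46) * 5.0                       # hypothetical DLF LAI targets
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```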

3.2.2. Time-Series LAI Estimations Using the LSTM Model

The reflectance in the shortwave–infrared, near-infrared, and red bands and the corresponding geometry information (i.e., the solar and viewing zenith angles, as well as the relative azimuth angle) from MOD09A1 served as the LSTM input layer, while the DLF LAI served as the output response. To extract the complete vegetation growth characteristics for one year, the sequential DLF LAI and reflectance values during 2014–2015 were used for training the LSTM model (LSTMfusion). Subsequently, the time-series LAI was predicted using the LSTMfusion model combined with the sequential reflectance data in 2016. To evaluate the LSTMfusion model, the LSTMGLASS, LSTMMODIS, and LSTMVIIRS models were also built using MOD09A1 and the corresponding LAI products. Given the irregular and discontinuous characteristics of the original MODIS and VIIRS products, these two products were fitted using the DLF approach before training.
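As an illustration of how one pixel's training sample could be assembled under the setup just described, the sketch below stacks the three reflectance bands and three angles into a (time, feature) array and pairs it with the DLF LAI target; the array names and shapes are hypothetical.

```python
import numpy as np

def build_sample(swir, nir, red, sza, vza, raa, dlf_lai):
    """Stack one pixel's 2014-2015 series (2 years x 46 eight-day dates = 92 steps)
    into an LSTM input sequence and its DLF LAI target sequence."""
    x = np.stack([swir, nir, red, sza, vza, raa], axis=-1)   # (92, 6) input features
    y = np.asarray(dlf_lai, dtype=float)                     # (92,) output response
    return x, y

# Hypothetical per-pixel time series of length 92.
bands = [np.random.rand(92) for _ in range(6)]
x, y = build_sample(*bands, dlf_lai=np.random.rand(92) * 5.0)
```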

3.3. Assessment of Our Proposed Method

The performance of our designed method was assessed using qualitative and quantitative metrics. Firstly, only the training datasets from 2014 to 2015 were used to evaluate the theoretical performance of the LSTM models. Several quantitative indices (i.e., R2 and RMSE) were computed to assess the theoretical performance of the LSTM models using the five-fold cross-validation (CV) method. Then, we used the training data to build the LSTM model and retrieve the LAI values based on the MODIS reflectance in 2016. Finally, the reference LAI maps in the UTM projection in 2016 were converted to the sinusoidal projection and aggregated to a 500 m resolution through averaging, and were considered the "true" reference for the validation of our proposed method. The aggregated LAI reference values were extracted only if more than 70% of the area of a 500 m-resolution grid cell was covered by high-resolution images. Additionally, the DLF, LSTMGLASS, LSTMMODIS, LSTMVIIRS, GLASS, MODIS, and VIIRS LAIs closest to the dates of the reference maps were chosen for a separate accuracy assessment against the reference LAI, which served as an indirect reference for further comparison and analysis. According to the QC information, the values retrieved with the main algorithm are high-quality LAI values. Therefore, only the MODIS LAI with QC < 64 and the VIIRS LAI with QC < 18 (i.e., main-algorithm retrievals) were selected as high-quality LAI values for validation. Subsequently, the temporal consistency of the time-series LSTMfusion LAI was assessed through comparisons with the three LAI products. Lastly, the time-series LSTMfusion LAI maps were briefly presented.
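A compact sketch of the QC screening and the reference-map aggregation rules described above is given below; the function boundaries and the way fine pixels are grouped into a 500 m cell are simplifications for illustration, not the exact gridding procedure.

```python
import numpy as np

def high_quality_masks(modis_qc, viirs_qc):
    """Main-algorithm retrievals only: MODIS QC < 64 and VIIRS QC < 18."""
    return modis_qc < 64, viirs_qc < 18

def aggregate_cell(fine_lai_in_cell, min_cover=0.7):
    """Average the ~30 m reference LAI pixels inside one 500 m grid cell; the cell
    is used for validation only if more than 70% of it is covered by valid pixels."""
    vals = np.asarray(fine_lai_in_cell, dtype=float)
    if np.isfinite(vals).mean() <= min_cover:
        return np.nan
    return float(np.nanmean(vals))

# Example: a 17 x 17 block of fine pixels with some gaps (hypothetical values).
cell = np.random.rand(17, 17) * 5.0
cell[np.random.rand(17, 17) < 0.2] = np.nan
ref_500m = aggregate_cell(cell)
```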

4. Results

4.1. Theoretical Performance

The theoretical performances of the LSTM models using the samples from 2014 and 2015 are presented in Table 2. LSTMGLASS obtained the best theoretical performance. Furthermore, the LSTMfusion LAI estimations also obtained greater consistency with the DLF LAI when compared to the LSTMMODIS and LSTMVIIRS models. Meanwhile, the worst theoretical performance was found for the LSTMVIIRS model. Generally speaking, all four LSTM models had good theoretical performance (R2 > 0.80, RMSE < 0.70), which demonstrated that it is feasible to estimate the time-series LAI using LSTM models coupled with multiple LAI products.

4.2. Independent Validation against Fine-Resolution LAI Reference Maps

As displayed in Figure 5, the accuracy performances of different LAI estimations are in the following order: LSTMfusion LAI (R2 = 0.83, RMSE = 0.82) > LSTMGLASS LAI (R2 = 0.79, RMSE = 0.93) > LSTMVIIRS LAI (R2 = 0.70, RMSE = 0.94) > DLF LAI (R2 = 0.67, RMSE = 0.98) > GLASS LAI (R2 = 0.68, RMSE = 1.05) > LSTMMODIS LAI (R2 = 0.78, RMSE = 1.25) > VIIRS LAI (R2 = 0.44, RMSE = 1.37) > MODIS LAI (R2 = 0.26, RMSE = 1.75). The proposed result (i.e., LSTMfusion LAI) showed the best performance among all the LAI estimations, while the MODIS LAI performed the worst. Furthermore, evident improvements were found in the LSTM estimations when compared to the corresponding original LAI products (i.e., DLF, GLASS, MODIS and VIIRS LAIs).
The estimations from the LSTM models tended to be lower than the reference LAI, except for LSTMMODIS, during the crop peak growing season. Additionally, all the model estimations showed overestimation before DOY 190. The RMSE and R2 values between the reference and the retrieved LAI values were calculated separately for each ground-measured date (Table 3 and Table 4), which provided further insight into the performance of our proposed approach. It was clear that the LSTMfusion estimations outperformed the other LAI values, with lower RMSEs on most ground-measured dates. The MODIS and VIIRS LAIs always had poorer performances with more significant variations.

4.3. Effects of the Fused MODIS and VIIRS LAIs on the Accuracy of the LSTM Algorithm

Although the MODIS and VIIRS LAIs were retrieved using a similar method, evident differences were found in our study area. Besides the different validation accuracies (Figure 5f,h, Table 3 and Table 4), the time-series variations in the MODIS and VIIRS LAIs were different during 2014–2016 (Figure 6). This may be attributed to specific model parameter adjustments for the VIIRS sensor and the differences in spectral responses between the two sensors [5]. Meanwhile, the difference between the two products may reveal potential useful information for the LSTM estimations. Consequently, we fused the MODIS and VIIRS LAIs through the DLF method, and then retrained the LSTM model (referred to as LSTMMODIS_VIIRS) based on the fused values. Figure 7 shows the accuracy assessment of the LSTMMODIS_VIIRS values against the reference LAI. The performances of multiple LSTM models are ranked as follows (Figure 5f,h, and Figure 7): LSTMMODIS_VIIRS (R2 = 0.79, RMSE = 0.83) > LSTMVIIRS (R2 = 0.70, RMSE = 0.94) > LSTMMODIS (R2 = 0.78, RMSE = 1.25). This confirms that the combination of the MODIS and VIIRS LAIs is conducive to the accuracy improvement of the LSTM estimations.

4.4. Temporal Analysis of the LSTMfusion LAI

Almost all the LAI time series displayed similar seasonal patterns, increasing rapidly during DOY 150–200 and decreasing gradually from DOY 250 to DOY 300 over the four randomly selected pixels, as shown in Figure 8. However, evident discrepancies between the LSTMfusion values and the three LAI products were found during the growth season. The temporal profiles showed that the MODIS and VIIRS LAI products had sudden fluctuations and missing data. On the contrary, the LSTMfusion estimation and the GLASS LAI had smoother and more continuous temporal profiles. Furthermore, compared to the GLASS LAI, the LSTMfusion estimation was closer to the reference LAI during the crop growth season. It is worth noting that all the LAI values slightly overestimated the reference values in the green-up period.

4.5. Spatial Distribution of the LSTMfusion LAI

To qualitatively assess the spatial distributions of the time-series LSTMfusion LAI estimations, twenty LAI maps with a time interval of 8 days during DOY 145–297, in 2016, and at a 500 m spatial resolution are shown in Figure 9. A gradual increase was found for the LAI from DOY 145 to DOY 209. Subsequently, the peak of the retrieved LAI values was obtained around DOY 217 over most regions of our study area, and then the LAI values steadily declined. Overall, the LSTMfusion LAI evolved reasonably over time and presented cyclical phenological patterns that fit the actual crop growth rule.

5. Discussion

5.1. Performance of the Proposed Approach

The estimated results of Zhang et al. [25] were limited by the simulated data that were used to build the LSTM model. Multiple satellite products make it feasible to retrieve the time-series LAI based on deep learning. We explored the potential of the LSTM model to retrieve the time-series LAI by using multiple satellite products. The results indicated that our proposed model had a high accuracy against the reference LAI (R2 = 0.83, RMSE = 0.82). This excellent performance implies a practical approach for time-series LAI estimation. Compared with previous work [27], the results also proved the potential of the LSTM algorithm for retrieving the time-series LAI based on a fused LAI. Additionally, apparent improvements in LAI quality were found in the LSTM models when compared to the corresponding LAI products. This indicates that the LSTM model can improve the quality of global LAI products at a regional scale. The LSTM algorithm considers information propagation from the previous status to the current status, and thus obtains a more continuous and robust time-series LAI. The performance differences among the LSTM models with different LAIs as the training output demonstrated that the LSTM estimations were affected by the quality of the training LAI data. The error of the LSTM estimations during the green-up and peak growth stages might be explained by the systematic underestimation of the GLASS LAI [29].
Poor performances were found for the VIIRS and MODIS LAIs, which may be attributed to retrieval algorithms, instrumentation problems, and cloud contamination [42]. In contrast, the GLASS LAI showed a smoother pattern that benefited from the gap-filling of the satellite reflectance data, but resulted in smaller values when compared to the reference LAI [29]. Furthermore, the fusion of various LAI products could enhance the continuity and accuracy of the LAI, as displayed in Figure 5e–h, which contributed to the improvement of our proposed model. A similar finding was shown in the work of Ma et al. [27], but our proposed method was validated using the reference "true" LAI at the regional scale.

5.2. Sensitivity of the Proposed Approach on Noisy Reflectance Inputs

The sensitivity of the LSTM algorithm to multiple sets of remotely sensed data was reported by Ma et al. [27], but was not quantitatively analyzed. To further assess the effects of reflectance data quality on our proposed method, we artificially added noise to the reflectance data. Noisy reflectance values representing cloud and shadow contamination were set to constants of 0.6 and 0.03, respectively, in the training dataset. We assumed that 10%, 30%, and 50% of the pixels were randomly contaminated. In each case, the ratio of noisy days in the training dataset was varied from zero to one during the vegetation growth season (from DOY 121 to DOY 321). The RMSEs between the LSTMfusion and the reference LAI in 2016 are shown in Figure 10. As the number of contaminated pixels increased, the RMSE of the LSTMfusion estimations increased, which indicated that the noisy reflectance in the training data impacted the retrieval accuracy of the LAI. Nevertheless, even when affected by more prominent noise, the retrieval results of our proposed method still performed better than the LSTMGLASS, LSTMMODIS, and LSTMVIIRS LAIs and the three LAI products. This confirmed the effectiveness and robustness of our proposed method.
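A minimal sketch of this noise-injection experiment is shown below, under the assumptions stated above (cloud-like noise fixed at 0.6, shadow-like noise at 0.03, random subsets of pixels and growing-season dates); the array shapes and the random choice between the two constants are illustrative details, not the exact procedure.

```python
import numpy as np

def add_noise(reflectance, pixel_ratio, doy_ratio, growth_idx, seed=0):
    """Contaminate a copy of a (pixels, dates, bands) reflectance array: a random
    subset of pixels receives cloud-like (0.6) or shadow-like (0.03) values on a
    random subset of growing-season dates (DOY 121-321)."""
    rng = np.random.default_rng(seed)
    noisy = reflectance.copy()
    n_pix = reflectance.shape[0]
    pix = rng.choice(n_pix, size=int(pixel_ratio * n_pix), replace=False)
    for p in pix:
        days = rng.choice(growth_idx, size=int(doy_ratio * len(growth_idx)), replace=False)
        noisy[p, days, :] = rng.choice([0.6, 0.03])   # cloud or shadow constant
    return noisy

# Example: 30% of pixels contaminated on 10% of growing-season dates (hypothetical data).
refl = np.random.rand(500, 46, 3) * 0.4               # one year of 8-day composites
growing = np.arange(15, 41)                           # composite indices covering DOY 121-321
noisy_refl = add_noise(refl, pixel_ratio=0.3, doy_ratio=0.1, growth_idx=growing)
```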
In addition to adding noisy reflectance during the LSTM training, artificial reflectance noise was added to the validation dataset in 2016 for the further investigation of our method. Two cases were designed based on a contaminated training set (30% pixel with 10% noisy DOYs from 2014 to 2015). One assumed that a 10% ratio of pixels were contaminated with different ratios of noisy DOYs, and the other took various proportions of pixels that contained a 10% ratio of noisy DOYs in the validation dataset. Figure 11a shows that the performance of the LSTMfusion model became poorer as the proportion of noisy DOYs increased in the 10% pixels. This might be attributed to the lack of reasonable temporal information of the reflectance data when the proportion of noisy DOYs increased. Note that the LSTMfusion estimations based on fewer than 30% noisy DOYs still achieved a higher accuracy when compared with the GLASS, MODIS, and VIIRS LAIs (R2 > 0.75, RMSE < 0.95). In contrast, Figure 11b shows that our proposed method is relatively insensitive to different ratios of noisy pixels, and even obtains a better accuracy with the low RMSE (<0.86) and the high R2 (>0.80) values. These phenomena demonstrate that our proposed method is immune to the contaminated reflectance to some extent, and robust in maintaining the spatio-temporal continuity of the LAI with noisy reflectance.

5.3. Limitations and Prospects

One limitation was associated with spatial heterogeneity. Although the study area was mainly covered by crops, non-vegetated regions, such as villages and roads, remained. The model estimations therefore reflected the combined influence of vegetated and non-vegetated surfaces at a 500 m resolution. This spatial heterogeneity would introduce considerable uncertainties and affect the independent validation based on the reference LAI. Although the proposed method improved the performance of the time-series LAI, it retained a bias at low DOYs, meaning that systematic errors of the satellite products (i.e., the overestimation at the crop green-up stage) are retained by the model. Another limitation was related to the spatio-temporal correlation. Since the distribution of vegetation tends to be clustered, the LAI was also expected to form clusters, and this stochastic clustering is linked to high correlation values. Therefore, a strong spatial correlation existed among the LAI pixels [43]. How to generate representative time-series LAI training samples is worthy of further study.
Deep-learning methods can efficiently retrieve the LAI, but they are considered "black boxes" that lack physical laws and cannot help to explain the causes of the retrieved results [10,44]. At present, the attention mechanism [45] has been developed to improve the interpretability of deep learning, and thus it is feasible to incorporate the attention mechanism into the LSTM model for time-series LAI estimations. Additionally, the Hurst index, which is used to describe the so-called Hurst phenomenon [46], could be introduced to quantify and analyze the inherent uncertainty of vegetation dynamics [47].
Moreover, the LSTM algorithm was only applied to time-series LAI estimation over single-season crops. Given that the seasonal characteristics and growth rules of various vegetation types differ, our proposed method should be tested across other vegetation types with low seasonality (e.g., evergreen needleleaf forest) or with two growing seasons in one year. Since the DLF performs poorly for regions with more than one growing season per year [48], other time-series reconstruction methods could be adopted in further work.

6. Conclusions

The LSTM algorithm was applied to retrieve the time-series LAI at a regional scale using multiple LAI products and a land surface reflectance product. Our proposed method not only accurately estimated the LAI values, but also improved the spatio-temporal continuity of the LAI. The fusion of multiple LAI products demonstrated that the DLF technique enhanced the quality of the original products. The performances of the LSTM models trained by a single LAI product confirmed the effectiveness of the LSTM model for time-series LAI retrieval, and demonstrated that the quality of the output variable influences the model accuracy. Additionally, the robustness of the proposed model to contaminated reflectance data was confirmed for time-series LAI retrieval.
As more and more satellite products (e.g., LAI, reflectance) become available, it is necessary to generate a blended and improved LAI product out of the existing ones through the integration of deep-learning algorithms and remotely sensed observations. It is feasible to combine multiple satellite products in the LSTM algorithm to produce time-series LAIs at the regional scale. In addition, the proposed method provides a potential for the estimation of other vegetation parameters from time-series satellite observations across other regions.

Author Contributions

H.J. conceived the experiments; T.L. performed the experiments, and wrote the paper; H.F. implemented the field experiments; A.L., H.F., H.J., X.X., D.W. and X.N. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported jointly by National Natural Science Foundation of China (42071352), National Key Research and Development Program of China (2020YFA0608702), and Chinese Academy of Sciences ‘Light of West China’ Program.

Data Availability Statement

The field-measured and high-resolution reference LAI data are available from PANGAEA (https://doi.pangaea.de/10.1594/PANGAEA.900090, accessed on 20 August 2021). Moreover, the GLASS data are available from http://glass.umd.edu (accessed on 20 August 2021). The MODIS and VIIRS data are available from https://earthdata.nasa.gov (accessed on 20 August 2021).

Acknowledgments

The authors would like to thank the reviewers and editors for their valuable comments and suggestions. We are also thankful to PANGAEA [30], Earth Science Data Systems (ESDS) [35], and Global LAnd Surface (GLASS)—UMD [36] for providing the field-measured data, fine-resolution reference LAI, MODIS, GLASS, and VIIRS data used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fang, H.; Ye, Y.; Liu, W.; Wei, S.; Ma, L. Continuous estimation of canopy leaf area index (LAI) and clumping index over broadleaf crop fields: An investigation of the PASTIS-57 instrument and smartphone applications. Agric. For. Meteorol. 2018, 253, 48–61. [Google Scholar] [CrossRef]
  2. Piao, S.; Wang, X.; Park, T.; Chen, C.; Lian, X.; He, Y.; Bjerke, J.; Chen, A.; Ciais, P.; Tømmervik, H.; et al. Characteristics, drivers and feedbacks of global greening. Nat. Rev. Earth Environ. 2020, 1, 14–27. [Google Scholar] [CrossRef]
  3. Running, S.W.; Baldocchi, D.D.; Turner, D.P.; Gower, S.T.; Bakwin, P.S.; Hibbard, K.A. A global terrestrial monitoring network integrating tower fluxes, flask sampling, ecosystem modeling and EOS satellite data. Remote Sens. Environ. 1999, 70, 108–127. [Google Scholar] [CrossRef]
  4. Zhang, W.; Jin, H.; Li, A.; Shao, H.; Xie, X.; Lei, G.; Nan, X.; Hu, G.; Fan, W. Comprehensive Assessment of Performances of Long Time-Series LAI, FVC and GPP Products over Mountainous Areas: A Case Study in the Three-River Source Region, China. Remote Sens. 2022, 14, 61. [Google Scholar] [CrossRef]
  5. Yan, K.; Park, T.; Chen, C.; Xu, B.; Song, W.; Yang, B.; Zeng, Y.; Liu, Z.; Yan, G.; Knyazikhin, Y.; et al. Generating global products of LAI and FPAR from SNPP-VIIRS data: Theoretical background and implementation. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2119–2137. [Google Scholar] [CrossRef]
  6. Baret, F.; Weiss, M.; Lacaze, R.; Camacho, F.; Makhmara, H.; Pacholcyzk, P.; Smets, B. GEOV1: LAI and FAPAR essential climate variables and FCOVER global time series capitalizing over existing products. Part1: Principles of development and production. Remote Sens. Environ. 2013, 137, 299–309. [Google Scholar] [CrossRef]
  7. Xiao, Z.Q.; Liang, S.L.; Wang, J.D.; Xiang, Y.; Zhao, X.; Song, J.L. Long-Time-Series Global Land Surface Satellite Leaf Area Index Product Derived from MODIS and AVHRR Surface Reflectance. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5301–5318. [Google Scholar] [CrossRef]
  8. Xiao, Z.Q.; Liang, S.L.; Wang, J.D.; Chen, P.; Yin, X.J.; Zhang, L.Q.; Song, J.L. Use of General Regression Neural Networks for Generating the GLASS Leaf Area Index Product from Time-Series MODIS Surface Reflectance. IEEE Trans. Geosci. Remote Sens. 2014, 52, 209–223. [Google Scholar] [CrossRef]
  9. Knyazikhin, Y.; Martonchik, J.V.; Myneni, R.B.; Diner, D.J.; Running, S.W. Synergistic algorithm for estimating vegetation canopy leaf area index and fraction of absorbed photosynthetically active radiation from MODIS and MISR data. J. Geophys. Res.-Atmos. 1998, 103, 32257–32275. [Google Scholar] [CrossRef]
  10. Verrelst, J.; Camps-Valls, G.; Muñoz-Marí, J.; Rivera, J.P.; Veroustraete, F.; Clevers, J.G.; Moreno, J. Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties—A review. ISPRS J. Photogramm. Remote Sens. 2015, 108, 273–290. [Google Scholar] [CrossRef]
  11. Baret, F.; Hagolle, O.; Geiger, B.; Bicheron, P.; Miras, B.; Huc, M.; Berthelot, B.; Niño, F.; Weiss, M.; Samain, O.; et al. LAI, fAPAR and fCover CYCLOPES global products derived from VEGETATION. Remote Sens. Environ. 2007, 110, 275–286. [Google Scholar] [CrossRef]
  12. Jiang, C.; Ryu, Y.; Fang, H.; Myneni, R.; Claverie, M.; Zhu, Z. Inconsistencies of interannual variability and trends in long-term satellite leaf area index products. Glob. Chang. Biol. 2017, 23, 4133–4146. [Google Scholar] [CrossRef]
  13. Fang, H.L.; Liang, S.L.; Townshend, J.R.; Dickinson, R.E. Spatially and temporally continuous LAI data sets based on an integrated filtering method: Examples from North America. Remote Sens. Environ. 2008, 112, 75–93. [Google Scholar] [CrossRef]
  14. Yuan, H.; Dai, Y.; Xiao, Z.; Ji, D.; Shangguan, W. Reprocessing the MODIS Leaf Area Index products for land surface and climate modelling. Remote Sens. Environ. 2011, 115, 1171–1187. [Google Scholar] [CrossRef]
  15. Verger, A.; Baret, F.; Weiss, M. A multisensor fusion approach to improve LAI time series. Remote Sens. Environ. 2011, 115, 2460–2470. [Google Scholar] [CrossRef]
  16. Jin, H.; Li, A.; Yin, G.; Xiao, Z.; Bian, J.; Nan, X.; Jing, J. A Multiscale Assimilation Approach to Improve Fine-Resolution Leaf Area Index Dynamics. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8153–8168. [Google Scholar] [CrossRef]
  17. Koetz, B.; Baret, F.; Poilve, H.; Hill, J. Use of coupled canopy structure dynamic and radiative transfer models to estimate biophysical canopy characteristics. Remote Sens. Environ. 2005, 95, 115–124. [Google Scholar] [CrossRef]
  18. Huang, J.X.; Tian, L.Y.; Liang, S.L.; Ma, H.Y.; Becker-Reshef, I.; Huang, Y.B.; Su, W.; Zhang, X.D.; Zhu, D.H.; Wu, W.B. Improving winter wheat yield estimation by assimilation of the leaf area index from Landsat TM and MODIS data into the WOFOST model. Agric. For. Meteorol. 2015, 204, 106–121. [Google Scholar] [CrossRef]
  19. Jin, H.; Xu, W.; Li, A.; Xie, X.; Zhang, Z.; Xia, H. Spatially and Temporally Continuous Leaf Area Index Mapping for Crops through Assimilation of Multi-resolution Satellite Data. Remote Sens. 2019, 11, 2517. [Google Scholar] [CrossRef]
  20. Reichle, R.H. Data assimilation methods in the Earth sciences. Adv. Water Resour. 2008, 31, 1411–1418. [Google Scholar] [CrossRef]
  21. Dickinson, R.E.; Tian, Y.; Liu, Q.; Zhou, L. Dynamics of leaf area for climate and weather models. J. Geophys. Res. Atmos. 2008, 113, D16115. [Google Scholar] [CrossRef]
  22. Han, X.; Li, X. An evaluation of the nonlinear/non-Gaussian filters for the sequential data assimilation. Remote Sens. Environ. 2008, 112, 1434–1449. [Google Scholar] [CrossRef]
  23. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef]
  24. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  25. Zhang, M.; Zhang, X.; Huang, C.; Tang, S.; Qi, W. Maize Leaf Area Index Retrieval Using FY-3B Satellite Data by Long Short-Term Memory Model. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 146–149. [Google Scholar]
  26. Ze-hao, L.; Qi-ming, Q.; Tian-yuan, Z.; Wei, X. Prediction of continuous time series leaf area index based on long short-term memory network: a case study of winter wheat. Spectrosc. Spectr. Anal. 2020, 40, 898–904. [Google Scholar]
  27. Ma, H.; Liang, S. Development of the GLASS 250-m leaf area index product (version 6) from MODIS data using the bidirectional LSTM deep learning model. Remote Sens. Environ. 2022, 273, 112985. [Google Scholar] [CrossRef]
  28. Liu, D.; Jia, K.; Xia, M.; Wei, X.; Yao, Y.; Zhang, X.; Tao, G. Fractional Vegetation Cover Estimation Algorithm based on Recurrent Neural Network for MODIS 250 m reflectance data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6532–6543. [Google Scholar] [CrossRef]
  29. Fang, H.; Zhang, Y.; Wei, S.; Li, W.; Ye, Y.; Sun, T.; Liu, W. Validation of global moderate resolution leaf area index (LAI) products over croplands in northeastern China. Remote Sens. Environ. 2019, 233, 111377. [Google Scholar] [CrossRef]
  30. PANGAEA. Available online: https://doi.pangaea.de/10.1594/PANGAEA.900090 (accessed on 20 August 2021).
  31. Center for Resource Satellite Data and Applications (CRESDA). Available online: http://218.247.138.119:7777/DSSPlatform/index.html (accessed on 20 August 2021).
  32. United States Geological Survey (USGS) EarthExplorer. Available online: https://earthexplorer.usgs.gov/ (accessed on 20 August 2021).
  33. Copernicus Open Access Hub. Available online: https://scihub.copernicus.eu/ (accessed on 20 August 2021).
  34. Kuusk, A. A two-layer canopy reflectance model. J. Quant. Spectrosc. Radiat. Transf. 2001, 71, 1–9. [Google Scholar] [CrossRef]
  35. Earth Science Data Systems (ESDS). Available online: https://earthdata.nasa.gov/ (accessed on 20 August 2021).
  36. Global LAnd Surface (GLASS)—UMD. Available online: http://glass.umd.edu/ (accessed on 20 August 2021).
  37. Knyazikhin, Y.; Martonchik, J.V.; Diner, D.J.; Myneni, R.B.; Verstraete, M.; Pinty, B.; Gobron, N. Estimation of vegetation canopy leaf area index and fraction of absorbed photosynthetically active radiation from atmosphere-corrected MISR data. J. Geophys. Res.-Atmos. 1998, 103, 32239–32256. [Google Scholar] [CrossRef]
  38. Land Processes Distributed Active Archive Center (LP DAAC). Available online: https://lpdaac.usgs.gov/ (accessed on 20 August 2021).
  39. Beck, P.S.A.; Atzberger, C.; Hogda, K.A.; Johansen, B.; Skidmore, A.K. Improved monitoring of vegetation dynamics at very high latitudes: A new method using MODIS NDVI. Remote Sens. Environ. 2006, 100, 321–334. [Google Scholar] [CrossRef]
  40. Baek, Y.; Kim, H.Y. ModAugNet: A new forecasting framework for stock market index value with an overfitting prevention LSTM module and a prediction LSTM module. Expert Syst. Appl. 2018, 113, 457–480. [Google Scholar] [CrossRef]
  41. de Castro Filho, H.C.; de Carvalho Júnior, O.A.; de Carvalho, O.L.F.; de Bem, P.P.; dos Santos de Moura, R.; de Albuquerque, A.O.; Silva, C.R.; Ferreira, P.H.G.; Guimarães, R.F.; Gomes, R.A.T. Rice crop detection using LSTM, Bi-LSTM, and machine learning models from sentinel-1 time series. Remote Sens. 2020, 12, 2655. [Google Scholar] [CrossRef]
  42. Gonsamo, A.; Chen, J.M. Improved LAI algorithm implementation to MODIS data by incorporating background, topography, and foliage clumping information. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1076–1088. [Google Scholar] [CrossRef]
  43. Dimitriadis, P.; Iliopoulou, T.; Sargentis, G.-F.; Koutsoyiannis, D. Spatial Hurst–Kolmogorov Clustering. Encyclopedia 2021, 1, 1010–1025. [Google Scholar] [CrossRef]
  44. Xu, J.; Yang, J.; Xiong, X.; Li, H.; Huang, J.; Ting, K.C.; Ying, Y.; Lin, T. Towards interpreting multi-temporal deep learning models in crop mapping. Remote Sens. Environ. 2021, 264, 112599. [Google Scholar] [CrossRef]
  45. Frintrop, S.; Rome, E.; Christensen, H.I. Computational visual attention systems and their cognitive foundations. ACM Trans. Appl. Percept. 2010, 7, 1–39. [Google Scholar] [CrossRef]
  46. Dimitriadis, P.; Koutsoyiannis, D.; Iliopoulou, T.; Papanicolaou, P. A Global-Scale Investigation of Stochastic Similarities in Marginal Distribution and Dependence Structure of Key Hydrological-Cycle Processes. Hydrology 2021, 8, 59. [Google Scholar] [CrossRef]
  47. Bashir, B.; Cao, C.; Naeem, S.; Joharestani, M.Z.; Bo, X.; Afzal, H.; Jamal, K.; Mumtaz, F. Spatio-Temporal Vegetation Dynamic and Persistence under Climatic and Anthropogenic Factors. Remote Sens. 2020, 12, 2612. [Google Scholar] [CrossRef]
  48. Atkinson, P.M.; Jeganathan, C.; Dash, J.; Atzberger, C. Inter-comparison of four models for smoothing satellite sensor time-series data to estimate vegetation phenology. Remote Sens. Environ. 2012, 123, 400–417. [Google Scholar] [CrossRef]
Figure 1. Study area. (a,b) Geographical location; (c) 11 LAI reference maps with the original UTM projection and the area of 30 km2 at the 30 m spatial resolution in 2016 (the text on the top of every image represents the date of the LAI reference map); (d) Study-area size, including 15 × 35 grids at 500 m resolution in the sinusoidal projection. The grey shade indicates the position of LAI reference maps in the sinusoidal projection and the red triangles represent the field plots.
Figure 2. Flowchart of time-series LAI estimations through the integration of deep learning and multiple products.
Figure 3. DLF, GLASS, MODIS, VIIRS, and reference LAIs during 2014–2016.
Figure 4. Structure of an LSTM memory cell.
Figure 5. Accuracy assessment (R2, RMSE and bias) of the LSTMfusion (a), DLF (b), LSTMGLASS (c), GLASS (d), LSTMMODIS (e), MODIS (f), LSTMVIIRS (g), and VIIRS (h) LAI values against the reference LAI. The color bar indicates different DOYs in 2016 and N refers to the number of the values for independent validation.
Figure 6. Time-series variations in MODIS (a) and VIIRS (b) LAIs during 2014–2016. The red line represents the average values of all available pixels, and the gray area stands for ± one standard deviation.
Figure 7. Accuracy assessment of the LSTMMODIS_VIIRS LAI values against the reference LAI.
Figure 8. Temporal variations in the LSTMfusion (a), MODIS (b), GLASS (c), VIIRS (d), and reference LAIs. The error bar of the reference LAI stands for mean ± one standard deviation.
Figure 9. Spatial distributions of LSTMfusion LAI during crop growing season in 2016. The region is the same as the 500 m-resolution grids in Figure 1d. The text on the top of every image (DOY) corresponds to the 8-day composite dates in the MOD09A1 reflectance product.
Figure 10. RMSEs of the LSTMfusion models based on the 10% (blue), 30% (green) and 50% (purple) noisy reflectance as training samples. The red dotted line represents the retrieval accuracy of the LSTMfusion model without noisy reflectance in the training dataset.
Figure 11. Variations in the retrieval accuracies of LSTMfusion based on (a) the 10% contaminated reflectance pixels with different ratios of noisy DOYs and (b) the 10% noisy DOYs with various ratios of reflectance pixels in 2016. The blue and red dotted lines separately represent the RMSE and R2 of the LSTMfusion-retrieved accuracy without contaminated data in 2016.
Table 1. Locations and crop types of the plots in the study area for field LAI measurements [29].
Plot   Longitude   Latitude   Crop Type
A      126.838°E   47.410°N   Maize
B      126.838°E   47.405°N   Soybean
C      126.805°E   47.401°N   Soybean
D      126.798°E   47.409°N   Maize
E      126.801°E   47.429°N   Sorghum
Table 2. Theoretical performances of the LSTM models.
Model        R2     RMSE   Bias
LSTMfusion   0.96   0.27   −0.02
LSTMGLASS    0.98   0.20   0.01
LSTMMODIS    0.88   0.61   −0.03
LSTMVIIRS    0.80   0.67   0.07
Table 3. RMSE between the retrieved and aggregated reference LAI on multiple ground-measured dates in 2016.
DOY   LSTMfusion   LSTMGLASS   LSTMMODIS   LSTMVIIRS   DLF    GLASS   MODIS   VIIRS
177   1.26         1.61        1.63        1.56        1.41   1.29    1.37    1.25
185   1.01         1.28        2.04        1.66        1.32   1.10    2.99    1.73
193   0.61         0.57        1.29        0.71        0.71   0.78    1.85    1.39
201   1.43         1.49        0.89        0.98        1.35   1.71    1.38    1.08
209   0.84         0.96        0.91        0.61        0.98   1.32    2.55    2.53
217   0.66         0.70        1.01        0.66        1.02   1.18    2.45    4.06
225   0.51         0.53        1.46        0.62        0.92   0.79    1.95    1.60
233   0.68         0.67        1.02        0.82        0.85   1.01    1.42    1.32
241   0.46         0.49        1.53        0.56        0.71   0.69    1.00    0.75
257   0.42         0.55        0.56        0.57        0.43   0.51    0.42    0.49
265   0.43         0.35        0.58        0.88        0.55   0.44    0.71    0.63
Table 4. R2 between the retrieved and aggregated reference LAI on multiple ground-measured dates in 2016.
DOY   LSTMfusion   LSTMGLASS   LSTMMODIS   LSTMVIIRS   DLF    GLASS   MODIS   VIIRS
177   0.26         0.02        0.05        0.26        0.00   0.02    0.05    0.04
185   0.40         0.08        0.42        0.15        0.08   0.08    0.00    0.42
193   0.47         0.21        0.59        0.31        0.15   0.19    0.00    0.37
201   0.47         0.22        0.58        0.29        0.15   0.20    0.07    0.23
209   0.52         0.20        0.58        0.47        0.17   0.193   0.25    0.10
217   0.64         0.09        0.59        0.58        0.11   0.09    0.04    0.26
225   0.66         0.19        0.67        0.61        0.25   0.20    0.03    0.03
233   0.72         0.17        0.69        0.68        0.27   0.17    0.20    0.14
241   0.73         0.17        0.72        0.69        0.24   0.17    0.42    0.47
257   0.00         0.00        0.00        0.01        0.01   0.00    0.27    0.24
265   0.04         0.01        0.23        0.13        0.01   0.00    0.12    0.33
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

