Article

The Effect of NDVI Time Series Density Derived from Spatiotemporal Fusion of Multisource Remote Sensing Data on Crop Classification Accuracy

1
Key Laboratory of Water Cycle and Related Land Surface Processes, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
2
University of Chinese Academy of Sciences, Beijing 100049, China
3
Department of Civil, Environmental and Geomatics Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
4
Key Laboratory of Animal Ecology and Conservation Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing 100101, China
5
Department of Resources and Environment, Shanxi Institute of Energy, Jinzhong 030600, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(11), 502; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi8110502
Submission received: 27 September 2019 / Revised: 4 November 2019 / Accepted: 6 November 2019 / Published: 7 November 2019

Abstract

Remote sensing data with high spatial and temporal resolutions can help to improve the accuracy of crop planting acreage estimates and contribute to the formulation and management of agricultural policies. It is therefore important to determine whether multisource sensor data can be fused into high spatial and temporal resolution time series for a target sensor with the help of a spatiotemporal fusion method. In this study, we employed three different sensor datasets to obtain one normalized difference vegetation index (NDVI) time series dataset with a 5.8-m spatial resolution using the spatial and temporal adaptive reflectance fusion model (STARFM). We studied the effectiveness of using multisource remote sensing data to extract crop classifications and analyzed whether an increase in the NDVI time series density significantly improves crop classification accuracy. The results indicated that multisource sensor data could be used for crop classification after spatiotemporal fusion and that the data source was not limited by the sensor platform. With the increase in the number of NDVI phases, the classification accuracy of the support vector machine (SVM) and the random forest (RF) classifiers gradually improved. If the added NDVI phases were not in the optimal time period for wheat recognition, the classification accuracy was not greatly improved. Under the same conditions, the classification accuracy of the RF classifier was higher than that of the SVM. In addition, this study can serve as a good reference for selecting the optimal time range of base image pairs in spatiotemporal fusion for high accuracy crop mapping, and can help avoid excessive data collection and processing.

1. Introduction

The spatial pattern of crops is the manifestation of agricultural production activities on land use and the efficient usage of natural resources and scientific field management [1,2]. With the growth of populations and the reduction of arable land, timely and accurate mapping of the spatial distribution of crops is an important foundation for growth monitoring, yield estimation, disaster assessment, grain production macro-control, and agricultural trade regulation [3,4]. The spatial patterns are also the evidence that allows relevant departments to promptly formulate agricultural policies to guarantee food security [5].
For a long time, the traditional approach for obtaining crop planting area was based on sampling surveys and step-by-step summaries [6]. However, this process is labor intensive and uses a large quantity of material resources, and the accuracy is greatly affected by a variety of subjective and objective factors [7,8].
Remote sensing data play an important role in many fields, such as air temperature estimation [9], land cover monitoring [10,11], fire incidence assessment [12], drought prediction [13], and monitoring of crop distribution [14,15]. However, due to technological and budget constraints, space-borne sensors must compromise among spatial resolution, temporal resolution, and repeat periods [16], and data from any single sensor cannot fully reflect spatiotemporal crop patterns [17,18], which generally exhibit complex spatiotemporal heterogeneity [19]. For example, a single medium or high spatial resolution sensor usually has a long repeat cycle and limited potential for capturing the entire growth period of a crop [2,20]; for the Landsat series, with its 16-day repeat period, the probability of acquiring high quality data (cloud cover <10%) is less than 10% because of cloud contamination [21]. On the other hand, a single sensor with high temporal resolution, such as the moderate resolution imaging spectroradiometer (MODIS), can provide temporally dense data for monitoring large-scale vegetation [22,23], but its coarse spatial resolution generally describes spatial heterogeneity poorly [24,25,26]. Hence, few sensors currently provide the high spatial and temporal resolution data needed to map complex and diverse crop planting structures and fragmented farmland.
Remote sensing spatiotemporal fusion models have been developed to merge high spatial resolution data with high temporal resolution data to produce high spatiotemporal resolution data, which can be used to identify crop distributions with high accuracy [27,28]. Researchers have used the original normalized difference vegetation index (NDVI) or time series derived from the spatial and temporal adaptive reflectance fusion model (STARFM) [29], which is the most widely used model, to study land cover classification or crop classification extraction through supervised classification methods [30,31,32,33,34,35,36]. However, researchers often pair only Landsat and MODIS in their studies [37]. Here, we were interested in whether additional kinds of sensor data can be fused with the STARFM to provide more data for the target sensor and accurately extract crop classifications.
In this study, we attempted to extend the NDVI time series from the ZY-3 satellite data, which are very suitable for crop monitoring [38], by combining the Landsat 8 Operational Land Imager (OLI) and HJ-1A/B charge coupled device (CCD) data, which have 30-m spatial resolution, through the STARFM. The support vector machine (SVM) and random forest (RF) algorithms are two widely used classifiers in the remote sensing community [33,36] and are used for crop classification in this study. The additional purposes of this study are to investigate the effect of the NDVI time series density on the classification accuracy by the two classifiers and to explore the feasibility of obtaining the optimal time period of wheat recognition for the NDVI fusion and classification by using the RF classifier.

2. Study Area and Data

2.1. Study Area

The Yucheng district in Shandong Province of China was selected as the study area; it is located in the central eastern portion of the North China Plain, which is one of the most important commodity grain bases in China (Figure 1). The study area covers nearly 990 km2 between 36°41′36″N–37°12′13″N and 116°22′11″E–116°45′00″E. It is situated in a temperate climatic zone and has an annual average temperature of approximately 13.30 °C, an annual average precipitation of nearly 555.5 mm, and an average annual evaporation of 1884.80 mm. The highest elevation in the study area is 27.27 m, and the lowest is 19.20 m.

2.2. Data and Preprocessing

2.2.1. ZY-3 Data

The ZY-3 data with a 5.8-m spatial resolution were downloaded from the China Centre for Resources Satellite Data and Application (CCRSDA). These data cover the whole growth stage of winter wheat from December to June in the study area (Table 1). The surface reflectance data were obtained from the raw downloaded ZY-3 data through orthorectification, radiometric calibration, atmospheric correction, and geometric correction, and were then used to calculate the NDVI. For the preprocessing, the rational polynomial coefficient model was used to orthorectify the ZY-3 data. Atmospheric correction was conducted using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module provided by ENVI version 5.0. A quadratic polynomial algorithm was used for the geometric correction, with Google Earth as the reference. The co-registration error was less than 0.3 pixels. UTM-WGS84 Zone 50N was selected as the projection coordinate system.

2.2.2. Landsat 8 OLI Data

Landsat 8 OLI is the newest satellite in the Landsat series. The Landsat 8 data (cloud cover <5%) were downloaded from the United States Geological Survey website (Table 1). The preprocessing of the Landsat 8 data mainly included radiometric calibration and atmospheric correction to obtain the surface reflectance data used to calculate the NDVI. The atmospheric correction was conducted using FLAASH. Since the downloaded Landsat 8 data had already been geometrically corrected based on terrain data, and the identifiable ground feature points coincided with those of ZY-3 according to visual inspection, the bilinear interpolation method was used to resample the Landsat 8 data directly to 5.8 m to meet the requirements of STARFM; the projection coordinate system was the same as that of ZY-3.
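As an illustration of this resampling step, the sketch below applies SciPy's bilinear (order = 1) interpolation to bring a 30-m grid to the 5.8-m ZY-3 pixel size; the input array and the handling of the zoom factor are assumptions, and production work would normally use a geospatial library that preserves the georeferencing.
```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical 30-m NDVI tile (e.g., from Landsat 8) as a 2-D array
ndvi_30m = np.random.rand(100, 100)

# Bilinear (order=1) resampling from the 30-m grid to the 5.8-m ZY-3 pixel size
scale = 30.0 / 5.8
ndvi_5p8m = zoom(ndvi_30m, zoom=scale, order=1)

print(ndvi_30m.shape, "->", ndvi_5p8m.shape)   # (100, 100) -> (517, 517)
```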

2.2.3. HJ-1A/B CCD Data

The HJ-1A/B CCD data (cloud cover <5%) were provided by CCRSDA (Table 1) and were processed through radiometric calibration, atmospheric correction and geometric correction to obtain the surface reflectance data and then to obtain the NDVI. The radiometric calibration was completed using the 2015 HJ-1A/B absolute radiometric calibration coefficients provided by CCRSDA. The ZY-3 data were used as the reference to geometrically register the HJ-1A/B CCD data, and the co-registration error was less than 0.3 pixels. The other preprocesses were the same as those for Landsat 8.

3. Method

Figure 2 presents the flowchart of this study. First, all remote sensing data were clipped to the same region. The NDVI was calculated using red and near infrared (NIR) bands. Then, the time series of the Landsat 8 and HJ-1A/B NDVIs were fused with the ZY-3 NDVI using STARFM [29]. To reduce the effect of noise on the vegetation data, the fused NDVI time series was filtered by Time-series Satellite data Analysis Tool (TIMESAT). Finally, the NDVI time series and its corresponding temporal features were used to classify crops using the SVM and RF, respectively, and to evaluate the accuracy of the classification results.
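For reference, the NDVI is computed per pixel as (NIR − red) / (NIR + red); the short sketch below assumes the two bands are already co-registered floating-point reflectance arrays, with illustrative variable names.
```python
import numpy as np

def compute_ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - red) / (NIR + red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

# Tiny synthetic example (surface reflectance values in [0, 1])
nir = np.array([[0.45, 0.50], [0.30, 0.55]])
red = np.array([[0.10, 0.08], [0.20, 0.05]])
print(compute_ndvi(nir, red))
```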

3.1. STARFM

The STARFM is based on one pair of images comprising one medium-resolution reflectance image (M) and one high-resolution reflectance image (L) acquired on the base date (tk), and blends their difference (L − M) with another medium spatial resolution reflectance image (M) from the prediction date (t0) to predict the corresponding high spatial resolution reflectance image (L). Under the assumption that the difference between the two resolutions does not change over time, a weighted averaging method is used within a moving window centered at the target pixel to reduce the boundary influence of adjacent pixels. The surface reflectance of the target pixel is predicted from the spectrally similar, cloudless pixels in the moving window as follows:
$$L\left(x_{w/2}, y_{w/2}, t_0\right) = \sum_{i=1}^{w} \sum_{j=1}^{w} \sum_{k=1}^{n} W_{ijk} \times \left( M(x_i, y_j, t_0) + L(x_i, y_j, t_k) - M(x_i, y_j, t_k) \right)$$
where w is the size of the moving window, (x_{w/2}, y_{w/2}, t_0) denotes the center pixel of the window at the prediction date t_0, L(x_{w/2}, y_{w/2}, t_0) is the predicted high spatial resolution reflectance value, M(x_i, y_j, t_0) is the medium spatial resolution reflectance value at date t_0, and L(x_i, y_j, t_k) and M(x_i, y_j, t_k) are the high and medium spatial resolution reflectance values, respectively, at the base date t_k. The weight W_{ijk}, which determines how much each similar pixel contributes to the surface reflectance of the target pixel, is determined by the spectral, temporal, and location differences after normalization [16,29].
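For illustration only, the following is a heavily simplified, single-pair sketch of this weighted prediction. It combines spectral, temporal, and distance differences into a weight, whereas the operational STARFM additionally selects spectrally similar cloud-free pixels and normalizes the three components; all function and variable names here are assumptions, not the authors' implementation.
```python
import numpy as np

def predict_pixel(M_t0, L_tk, M_tk, ci, cj, w=31):
    """Simplified STARFM-style prediction of the fine-resolution value at (ci, cj).

    M_t0: coarse image at prediction date t0
    L_tk, M_tk: fine and coarse images at base date tk
    w: moving-window size (odd number of pixels)
    """
    h = w // 2
    r0, r1 = max(ci - h, 0), min(ci + h + 1, M_t0.shape[0])
    c0, c1 = max(cj - h, 0), min(cj + h + 1, M_t0.shape[1])

    m0, lk, mk = M_t0[r0:r1, c0:c1], L_tk[r0:r1, c0:c1], M_tk[r0:r1, c0:c1]

    # Simplified weights: small spectral/temporal differences and short
    # distances to the target pixel receive large weights
    spectral = np.abs(lk - mk) + 1e-6
    temporal = np.abs(mk - m0) + 1e-6
    rows, cols = np.mgrid[r0:r1, c0:c1]
    distance = 1.0 + np.hypot(rows - ci, cols - cj) / h
    weight = 1.0 / (spectral * temporal * distance)
    weight /= weight.sum()

    # L(t0) = sum of W * (M(t0) + L(tk) - M(tk)) over the window
    return float(np.sum(weight * (m0 + lk - mk)))

# Usage on synthetic images
rng = np.random.default_rng(0)
M_t0, L_tk, M_tk = (rng.random((120, 120)) for _ in range(3))
print(predict_pixel(M_t0, L_tk, M_tk, ci=60, cj=60))
```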

3.2. NDVI Time Series and A-G Filter

The NDVI time series of vegetation is usually continuous and smooth and reflects the growth process of the vegetation, but remote sensing data are influenced by various factors in the acquisition process [36]. These factors cause the NDVI time series to fluctuate irregularly and may affect trend analysis and the extraction of temporal features from the NDVI time series [39]. Therefore, it is necessary to filter the noise from the NDVI time series to improve the data quality. The TIMESAT software includes three filtering methods, i.e., asymmetric Gaussian functions (A-G), the double logistic function method (D-L), and the Savitzky–Golay method (S-G). The A-G filtering method has the highest fidelity to the original data [40,41]; therefore, it is more suitable for processing the NDVI time series of grassland, farmland, and other vegetation types [41,42,43]. Accordingly, this study adopted the A-G method to remove noise from the NDVI time series.
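TIMESAT's A-G method fits local, asymmetric Gaussian model functions to each growing season and merges them into a smooth curve. The fragment below is only a minimal, single-season illustration of that idea, fitting an asymmetric Gaussian to synthetic NDVI samples with SciPy; it is not a re-implementation of TIMESAT.
```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(t, base, amp, t_peak, sigma_left, sigma_right):
    """Asymmetric Gaussian: different widths left and right of the peak."""
    sigma = np.where(t < t_peak, sigma_left, sigma_right)
    return base + amp * np.exp(-0.5 * ((t - t_peak) / sigma) ** 2)

# Noisy synthetic NDVI season (day of year vs. NDVI)
t = np.linspace(60, 180, 25)
rng = np.random.default_rng(0)
y = asym_gauss(t, 0.2, 0.6, 125, 20, 12) + rng.normal(0, 0.03, t.size)

# Fit the model and evaluate the smoothed curve at the observation dates
p0 = [0.2, 0.5, 120, 15, 15]                 # rough initial guess
params, _ = curve_fit(asym_gauss, t, y, p0=p0)
y_smooth = asym_gauss(t, *params)
```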

3.3. Extracting Temporal Features from NDVI Time Series

The NDVI time series can appropriately reflect the growth of vegetation and contains information used to distinguish vegetation types [44,45], such as seasonal and phenological changes. The NDVI time series can represent the seasonal and annual variations in crop growth (i.e., periodic variations from planting and seedling emergence to maturity at harvest) [36]. For winter wheat, there were two peaks in the NDVI curve during the growth period (Figure 3). The first occurred in early December: after the sowing period, the NDVI of winter wheat increased gradually and reached the first peak, while the NDVI of other vegetation decreased gradually with decreasing temperature. At this stage, the time-NDVI slope of wheat was greater than zero, while that of other vegetation was less than zero. The second peak occurred between late April and early May. The biomass of winter wheat increased rapidly after the rising and jointing stages, and the NDVI also increased; by early May, the NDVI reached its maximum, forming the second peak. Then, the leaves gradually withered, the NDVI gradually decreased, and the slope of the NDVI curve became less than zero, while the slope for other vegetation was still greater than zero and their NDVI continued to increase. These characteristics of the NDVI curve are an important basis for identifying winter wheat.
Three NDVI time series were constructed: 4 ZY-3 NDVIs, 14 ZY-3 + Landsat 8 fused NDVIs, and 25 ZY-3 + Landsat 8 + HJ-1A/B fused NDVIs. Directly using the original or fused NDVI time series for land cover classification can reduce the accuracy of the classification model because of the large amount of redundant information they contain [46]. Therefore, to reduce the redundancy of the high-dimensional data [30], six temporal features were extracted from the denoised 4-, 14-, and 25-phase NDVI time series for classification: the maximum, minimum, mean, standard deviation, time-accumulated NDVI, and the time of the maximum value.
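A per-pixel version of this feature extraction might look like the sketch below. The exact definitions used in the paper (for example, how the time-accumulated NDVI is integrated) are not spelled out, so trapezoidal integration over the acquisition dates is an assumption here.
```python
import numpy as np

def temporal_features(ndvi_series, dates):
    """Six temporal features from one pixel's (denoised) NDVI time series.

    ndvi_series: 1-D array of NDVI values; dates: matching acquisition times
    (e.g., days since the first image).
    """
    ndvi_series = np.asarray(ndvi_series, dtype=float)
    dates = np.asarray(dates, dtype=float)
    return {
        "maximum": ndvi_series.max(),
        "minimum": ndvi_series.min(),
        "mean": ndvi_series.mean(),
        "std": ndvi_series.std(),
        # Time-accumulated NDVI approximated by trapezoidal integration over time
        "accumulated": np.trapz(ndvi_series, dates),
        # Acquisition time of the maximum NDVI value
        "time_of_max": dates[int(np.argmax(ndvi_series))],
    }

# Example: a synthetic 25-phase series indexed by days since the first acquisition
days = np.linspace(0, 190, 25)
ndvi = 0.3 + 0.4 * np.sin(days / 190 * np.pi)
print(temporal_features(ndvi, days))
```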

3.4. SVM and RF

The SVM has unique advantages in solving nonlinear, small-sample, and high-dimensional pattern recognition problems [47]. It is widely used in land cover classification [48], identification of forest types [49,50], and crop monitoring [51], and has achieved higher classification accuracy than traditional supervised classification methods [32]. The LIBSVM package (available at http://www.csie.ntu.edu.tw/~cjlin/libsvm) provides multiple SVM classifiers that support multiclass classification and cross-validation [52].
In this study, we used the default SVM classifier (C-SVC) [52] for crop classification. The kernel functions commonly used in the SVM are the linear, polynomial, radial basis (RBF), and sigmoid functions, and the choice directly affects the performance of the classification model. In our case, the number of training samples is much larger than the number of features, so an optimal result can generally be obtained with a nonlinear kernel; therefore, we used the RBF kernel. Using grid search with k-fold cross validation, the parameter pair (the kernel parameter γ and the penalty parameter C) with the highest cross-validation accuracy was selected as the global optimum [52].
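For illustration, a grid search of this kind can be reproduced with scikit-learn's SVC (which wraps LIBSVM); the parameter ranges, the five-fold setting, and the synthetic data below are assumptions rather than the values used in this study.
```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for the NDVI feature vectors and the class labels
rng = np.random.default_rng(0)
X = rng.random((200, 25))
y = rng.integers(0, 2, 200)

# Grid search over (C, gamma) with stratified k-fold cross validation, RBF kernel
param_grid = {"svc__C": [1, 10, 100, 1000], "svc__gamma": [0.001, 0.01, 0.1, 1]}
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(n_splits=5))
search.fit(X, y)
print(search.best_params_, search.best_score_)
```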
The random forest (RF) classifier is a flexible and easy-to-use machine learning algorithm. It is essentially an ensemble learning method and is widely used in land cover classification [33], biomass estimation [53], cloud detection [54], and species classification [55]; see Breiman [56] for a detailed introduction. Crop mapping with the RF was carried out using the EnMAP-Box tools provided by the German Environmental Mapping and Analysis Program [57].
The NDVI time series with 4, 14, and 25 phases and the corresponding temporal features were combined into six kinds of classification input vectors, which were termed the NDVI time series classification models (NDVIS/R-TSCM4, NDVIS/R-TSCM14, and NDVIS/R-TSCM25) and the temporal feature classification models (NDVIS/R-TFCM4, NDVIS/R-TFCM14, and NDVIS/R-TFCM25), where the subscripts S and R denote the SVM and RF classifiers, respectively.

3.5. Accuracy Validation

The NDVIs predicted by STARFM were qualitatively and quantitatively evaluated against the observed NDVIs. The quantitative indexes included the determination coefficient (R2), root mean square error (RMSE), average absolute difference (AAD), average difference (AD) [29,58,59], and standard deviation (SD).
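These indexes can be computed directly from the observed and predicted images. A minimal sketch follows, assuming R2 is the squared Pearson correlation and AAD, AD, and SD are the mean absolute difference, mean difference, and standard deviation of the per-pixel differences; the paper does not spell out the exact formulas.
```python
import numpy as np

def fusion_metrics(observed, predicted):
    """Agreement between observed and STARFM-predicted NDVI images."""
    obs, pred = observed.ravel(), predicted.ravel()
    diff = pred - obs
    r = np.corrcoef(obs, pred)[0, 1]
    return {
        "R2": r ** 2,                         # squared Pearson correlation
        "RMSE": np.sqrt(np.mean(diff ** 2)),  # root mean square error
        "AAD": np.mean(np.abs(diff)),         # average absolute difference
        "AD": np.mean(diff),                  # average difference (bias)
        "SD": np.std(diff),                   # standard deviation of differences
    }

# Usage on synthetic observed/predicted NDVI images
rng = np.random.default_rng(0)
obs = rng.random((100, 100))
pred = np.clip(obs + rng.normal(0, 0.04, obs.shape), 0, 1)
print(fusion_metrics(obs, pred))
```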
The quantitative evaluation indexes, including the confusion matrix, kappa coefficient, overall accuracy, producer accuracy, user accuracy [60], and the true skill statistic (TSS) [61], were calculated using randomly selected sample pixels to evaluate the classification accuracies of NDVIS/R-TSCM4/14/25 and NDVIS/R-TFCM4/14/25. In addition, the statistical areas of the different classification results were also compared.
Combining ground surveys with land cover type information for the region, we selected eight land cover classes. This process was completed with the help of Google Earth and ground survey points. The numbers of wheat and non-wheat sample pixels were 882 and 1333, respectively (Table 2). The samples were selected using a stratified random sampling method so that they were representative of the study area. Sixty-five percent of the samples were randomly selected for training, and the remaining 35% were used for validation.
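A minimal sketch of this validation workflow is shown below. It uses scikit-learn and synthetic samples in place of LIBSVM/EnMAP-Box and the real training data; the 65/35 stratified split follows the text, and the TSS is computed as sensitivity + specificity − 1 for the two-class (wheat/non-wheat) case.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score

# Synthetic stand-ins for the NDVI feature vectors and wheat / non-wheat labels
rng = np.random.default_rng(0)
X = rng.random((2215, 25))
y = rng.integers(0, 2, 2215)

# 65% training / 35% validation, stratified by class
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, train_size=0.65, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_va)

tn, fp, fn, tp = confusion_matrix(y_va, y_pred).ravel()
sensitivity = tp / (tp + fn)            # producer's accuracy for wheat
specificity = tn / (tn + fp)            # producer's accuracy for non-wheat
tss = sensitivity + specificity - 1     # true skill statistic

print("OA:", accuracy_score(y_va, y_pred),
      "kappa:", cohen_kappa_score(y_va, y_pred),
      "TSS:", tss)
```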

4. Result Analysis

4.1. Quantitative Evaluation of Fusion Results

To validate the STARFM results, a small rectangular region of approximately 1390 × 865 pixels was randomly selected from the observed and predicted NDVI images of 2 December 2014 and 4 February 2015 (Figure 4 and Figure 5). The results illustrated that the NDVI spatial distribution was consistent between the observed and predicted NDVIs, and the differences between Figure 4a,b were mainly distributed around zero (Figure 4c). According to the correlation analysis, the scatter points were concentrated along the line x = y. The R2 values were 0.969 and 0.950 at the 0.01 significance level, respectively (Figure 5); the RMSEs were 0.040 and 0.039, respectively, which means that the estimation error of the predicted NDVI images was only approximately 4% (Figure 5). This result indicated that the deviation between the predicted and observed NDVIs was very small.
The AAD, AD, and SD values of Figure 4c were small, more than 97.55% of the pixel difference absolute values were less than 0.1, and almost all of the pixel difference absolute values were less than 0.2, which meant that the predicted results had rather high accuracies (Table 3). Accordingly, STARFM could effectively predict the missing high spatial resolution NDVI values by fusing the NDVIs of different spatial and temporal resolutions.

4.2. Classification Accuracy Assessment

4.2.1. Verification Using Ground Survey Points

The kappa and TSS coefficients were calculated using the 775 validation samples. The confusion matrices of the twelve classification results are shown in Table 4, Table 5 and Table 6, from which we can observe the following:
(1) The crop classification accuracy was the highest for NDVIR-TSCM25. The overall accuracy, kappa coefficient, and TSS values of NDVIS-TSCM25 and NDVIS-TFCM25 were smaller than those of NDVIR-TSCM25 and NDVIR-TFCM25 (97.68%, 0.953, 0.953 and 98.06%, 0.961, 0.959 vs 99.10%, 0.982, 0.982 and 98.32%, 0.966, 0.964, respectively) (Table 4).
(2) From Table 5 and Table 6, the overall accuracy and kappa coefficient of NDVIS-TFCM14/4 were higher than those of NDVIS-TSCM14/4. The overall accuracy and kappa coefficient of NDVIR-TFCM14/4 were smaller than those of NDVIR-TSCM14/4. The NDVIR-TFCM and NDVIR-TSCM were superior to NDVIS-TSCM and NDVIS-TFCM when 14 or 4 NDVI time series were used.
(3) Comparing Table 4, Table 5 and Table 6 for any case of NDVI-TSCM or NDVI-TFCM, as the number of NDVI phases increased from 4 to 14 and then to 25, the overall accuracy and the kappa and TSS coefficients of both the SVM and RF classifiers improved to different degrees. NDVI-TFCM was better than NDVI-TSCM for the SVM classifier, regardless of whether the NDVI phase was 4, 14, or 25, but the opposite held for the RF classifier. The NDVIS-TFCM benefited the most from the increase in the NDVI time series density. Therefore, we concluded that increasing the number of NDVI phases helps to improve the classification accuracy of both the SVM and RF classifiers.

4.2.2. Verification of Wheat Area

The accuracy of the classification results was further evaluated by verifying the derived acreage. According to data published by the Yucheng City Bureau of Statistics, the wheat acreage of Yucheng was approximately 48,759.33 hectares in 2015. The wheat acreages derived from the 12 classification results are shown in Table 7. The relative errors for NDVI-TSCM25 and NDVI-TFCM25 were smaller than those for NDVI-TSCM14/4 and NDVI-TFCM14/4, which also showed that with the increase of the NDVI phases from 4 to 14 and then to 25, the accuracies for NDVI-TSCM and NDVI-TFCM both improved, consistent with the results of the ground surveys (Table 4, Table 5 and Table 6). The classification accuracy for NDVI-TSCM was higher than that for NDVI-TFCM under the RF classifier, while the opposite held under the SVM.

4.2.3. Visual Judgment of the Classification Results

Two small regions of the classification results for NDVIS/R-TSCM4/14/25 and NDVIS/R-TFCM4/14/25 are shown in Figure 6 and Figure 7. A standard false color composite of the ZY-3 data from 2 December 2014 aids the identification of the vegetation distribution (Figure 6M and Figure 7M). Figure 6N shows the classification result for the whole study area based on NDVIR-TSCM25. Comparing classification results over the whole study area would not reveal the detailed differences among them; therefore, the classification results for the first small region are displayed in Figure 6A–L. Comparing Figure 6A–L, we found that the spatial distributions of the 12 classification results were basically the same, and the main differences appeared in the linear and small patch features. With the increase in the NDVI phases from 4 to 14 and then to 25, the fragmentation of the classification results gradually decreased.
A smaller area can better show the detailed differences in the classification results, so a second, smaller area is shown in Figure 7. Figure 7N displays a Google Earth image of this area from 8 February 2015 for discerning the wheat region. The classification accuracy with NDVIS-TFCM14/25 was higher than that with NDVIS-TSCM14/25 over small areas, roads, and other linear features (Figure 7A,B,E,F). The classification accuracy with NDVIS-TSCM4 on patch features was higher than that with NDVIS-TFCM4, but its identification of linear features was weaker (Figure 7I,J). With the increase in the NDVI phases from 14 to 25, the misclassification probability for roads was reduced, and the ability to distinguish between wheat and non-wheat became stronger (Figure 7A,E). The recognition accuracy for wheat with NDVIS-TFCM4 was significantly weaker than with NDVIS-TFCM25 and NDVIS-TFCM14 (Figure 7A,E,I). With the increase in the NDVI phases, the recognition accuracy for non-wheat using NDVIS-TSCM also improved (Figure 7B,F,J). For the SVM classifier, the increase in the NDVI phases from 4 to 14 and then to 25 thus helped to improve the classification accuracy, which is consistent with the conclusion from the ground survey samples.
From visual assessment, NDVIR-TFCM14/25 and NDVIR-TSCM14/25 had similar classification accuracies, except in some small areas (Figure 7C,D,G,H). Similarly, the recognition accuracy for wheat using NDVIR-TFCM4 and NDVIR-TSCM4 was significantly weaker than when using NDVIR-TFCM25/14 and NDVIR-TSCM25/14 (Figure 7C,D,G,H,K,L). With the increase in the NDVI phases, the recognition accuracy for wheat using NDVIR-TSCM also improved. The classification accuracy with NDVIR-TSCM was significantly higher than that with NDVIS-TSCM, regardless of whether the NDVI phase was 4, 14, or 25 (Figure 7B,D,F,H,J,L). The increase in the NDVI phases improved the classification accuracies of both the SVM and RF classifiers, and the RF classifier performed slightly better than the SVM classifier. The reason for the high classification accuracy might be that the high spatiotemporal resolution NDVI time series contained information about the growth period of the vegetation and thus provided valuable information for recognizing the vegetation type.

4.3. Effect of Base Images on Classification Accuracy

Extending the NDVI time series through spatiotemporal fusion and combining it with a machine learning model for crop mapping and land cover classification has been shown to be a valuable strategy. However, the effect of data selection on the performance of the fusion model seems to have attracted little attention. The temporal density of the NDVI time series directly affects the accuracy of crop classification: generally, the classification accuracy improves as the density increases, but so does the workload. A relevant question, therefore, is whether fusing all available data is necessary to obtain high classification accuracy (e.g., overall accuracy >95%), bearing in mind that data collection is costly. In this study, as the number of NDVI temporal phases increased, the importance of each image date was assessed with the RF classifier to guide the selection of image dates. The statistical results are shown in Table 8. According to the variable importance, the image from 14 April 2015 was the most important, followed by 2 December 2014, in the four-phase NDVI (Table 8). As the number of NDVI temporal phases increased, the optimal time range for the image pair was from 24 March to 30 April 2015, and a secondary time range was before 3 January 2015 (Table 8), which is basically consistent with the phenological calendar. With the increase in the NDVI temporal phases, the classification accuracy gradually improved; if the added NDVI phases were not in the optimal time period for wheat recognition, the classification accuracy was not greatly improved. This also shows that the date of the base image pair can be selected using the phenological calendar and the crop NDVI curve [62]. With the assistance of the phenological calendar, better classification results can usually be obtained by selecting remote sensing images near the nodes where the stage of crop growth changes. In addition, when a phenological calendar is not available, the optimal time range of images can be obtained with the help of coarse spatial resolution remote sensing data (e.g., MODIS) and the RF classifier, which avoids excessive data collection and processing and provides one method for selecting the input data for spatiotemporal fusion. A sketch of ranking image dates in this way is given below.
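As that sketch, the snippet below trains a random forest on per-date NDVI features and reads its impurity-based variable importances. It uses scikit-learn and synthetic data rather than the EnMAP-Box implementation used in the study, and the date list shown is just the four ZY-3 acquisitions as an example.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

dates = ["2014-12-02", "2015-02-04", "2015-04-14", "2015-06-12"]  # 4-phase example

# Synthetic NDVI features (one column per image date) and wheat / non-wheat labels
rng = np.random.default_rng(1)
X = rng.random((500, len(dates)))
y = rng.integers(0, 2, 500)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Rank the image dates by their contribution to the classification
ranking = sorted(zip(dates, rf.feature_importances_), key=lambda p: -p[1])
for date, importance in ranking:
    print(f"{date}: {importance:.3f}")
```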

5. Discussion

According to the classification results, increasing the NDVI temporal phase of NDVI-TFCM and NDVI-TSCM can help to improve the classification accuracy. The reason may be that the NDVI temporal phases increased from 4 to 14 and then to 25, which meant an increase in the image acquisition frequency and a decrease in the time gap between adjacent NDVIs in the study period. This helped to improve the accuracy of crop type identification. Moreover, the temporal information can help determine the growth characteristics of vegetation more accurately, which is also helpful for crop classification, especially for the identification of vegetation types [31,63].
Although MODIS has the advantages of a short revisit period and open data access, its finest spatial resolution is only 250 m, and mixed pixels are very common compared with the ZY-3 data. Thus, combining MODIS with ZY-3 data is unsuitable for meeting the refined monitoring requirements of regions with small and fragmented plots [36]. In contrast, the Landsat 8 and HJ-1A/B data, with a 30-m spatial resolution, can reflect the spatial heterogeneity of the crop distribution better than MODIS can, and are therefore more suitable for combination with the ZY-3 data. Considering that the availability and quality of the Landsat 8 data are better than those of the HJ-1A/B data, we chose Landsat 8 as the main data source and HJ-1A/B as an auxiliary data source to provide temporal change information for ZY-3. The evaluation of the fusion results shows that the STARFM can effectively predict the ZY-3 NDVIs that are unavailable because of the long revisit cycle (59 days) and cloud contamination. The constructed ZY-3-like NDVI time series data were fine enough to detect small objects and regional heterogeneity at spatial scales of less than 30 m.
In this study, the accuracy of the fusion results was also affected by the following factors: (1) the quality of the ZY-3, Landsat 8, and HJ-1A/B data; (2) considerable changes in vegetation growth at spatial scales of less than 30 m occurred during the study period, which the Landsat 8 and HJ-1A/B data could not capture, thereby introducing errors into the fused images; (3) STARFM was proposed for reflectance fusion, so directly applying it to NDVI fusion may produce some deviations; and (4) because of the interference of atmospheric conditions and other factors, such as the bidirectional reflectance distribution function, some noise may have been introduced into the NDVI images during image processing. Avoiding such noise would further improve the accuracy of the fusion results.
In recent years, an increasing number of nonparametric classifiers have been developed, which would be preferable for use in improving classification accuracy in future studies. High spatial resolution images, such as GF-1/2, can be used with the mentioned method for land cover classification. Thus, the combination of the spatiotemporal fusion model and the advanced classification model can play an important role in regional forest cover mapping, land cover mapping, and the identification of crop types.

6. Conclusions

The research in this study showed that multisource sensor data can be used for crop classification after spatiotemporal fusion and that the data source was not limited by the sensor platform, which provides a flexible choice for data sources. The classification accuracy with the NDVI temporal features vector was not necessarily higher than that using the NDVI time series itself, which was related to the selection of the classifier. With the increase in the NDVI phases from 4 to 14 and then to 25, the classification accuracy with the use of NDVI-TSCM and NDVI-TFCM were both improved. The classification accuracy with NDVI-TSCM was higher than that with NDVI-TFCM under the RF classifier, while it had the opposite trend under the SVM. With the increase in the NDVI temporal phase, the classification accuracy was gradually improved. If the added NDVI phases were not in the optimal time period for wheat recognition, the classification accuracy was not greatly improved. In addition, this study can provide a good reference for the selection of the optimal time ranges of base image pairs for spatiotemporal fusion methods for high accuracy mapping of crops, and help avoid excessive data collection and processing.

Author Contributions

Rui Sun was the main author of the manuscript. Rui Sun, Shaohui Chen and Hongbo Su revised the manuscript. Chunrong Mi and Ning Jin provided some valuable suggestions. All of the authors have read and approved the final manuscript.

Funding

This study was supported by the Natural Science Fund of China (grant numbers 41671368 and 41371348), the Second Tibetan Plateau Scientific Expedition and Research Program (grant number 2019QZKK1003), and the Strategic Priority Research Program A of the Chinese Academy of Sciences (grant number XDA20010301).

Acknowledgments

We thank the USGS for providing Landsat data, as well as the China Centre for Resources Satellite Data and Application for providing HJ-1A/B CCD and ZY-3 data. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, H.; Wu, W.; Yang, P.; Zhou, Q.; Chen, Z. Recent Progresses in Monitoring Crop Spatial Patterns by Using Remote Sensing Technologies. Sci. Agric. Sin. 2010, 43, 2879–2888. (In Chinese) [Google Scholar]
  2. Ozdogan, M. The spatial distribution of crop types from MODIS data: Temporal unmixing using Independent Component Analysis. Remote Sens. Environ. 2010, 114, 1190–1204. [Google Scholar] [CrossRef]
  3. Chen, S.; Liu, Q.; Chen, L.; Li, J.; Liu, Q. Review of research advances in remote sensing monitoring of grain crop area. Trans. Chin. Soc. Agric. Eng. 2005, 21, 166–171. (In Chinese) [Google Scholar]
  4. Xiao, X.M.; Boles, S.; Frolking, S.; Li, C.S.; Babu, J.Y.; Salas, W.; Moore, B. Mapping paddy rice agriculture in South and Southeast Asia using multi-temporal MODIS images. Remote Sens. Environ. 2006, 100, 95–113. [Google Scholar] [CrossRef]
  5. David, V.; Sturza, M. Crop area estimation with remote sensing. Res. J. Agric. Sci. 2010, 42, 531–534. [Google Scholar]
  6. Yao, F.; Feng, L.; Zhang, J. Corn Area Extraction by the Integration of MODIS-EVI Time Series Data and China’s Environment Satellite (HJ-1) Data. J. Indian Soc. Remote Sens. 2014, 42, 859–867. [Google Scholar] [CrossRef]
  7. Liu, H.; He, B.; Zhang, H. Anhui Winter Wheat Growing Remote Sensing Monitoring and Evaluation Methods Research. Chin. Agric. Sci. Bull. 2011, 27, 18–22. (In Chinese) [Google Scholar]
  8. Li, H.; Zhang, S.; Wang, Z. Application and Analysis of MODIS Satellite NDVI Time Series Change in Winter Wheat Area Estimate. Meteorol. Environ. Sci. 2011, 34, 46–49. (In Chinese) [Google Scholar]
  9. Pelta, R.; Chudnovsky, A.A. Spatiotemporal estimation of air temperature patterns at the street level using high resolution satellite imagery. Sci. Total Environ. 2017, 579, 675–684. [Google Scholar] [CrossRef]
  10. Xystrakis, F.; Psarras, T.; Koutsias, N. A process-based land use/land cover change assessment on a mountainous area of Greece during 1945–2009: Signs of socio-economic drivers. Sci. Total Environ. 2017, 587, 360–370. [Google Scholar] [CrossRef]
  11. Restrepo, A.M.C.; Yang, Y.R.; Hamm, N.A.; Gray, D.J.; Barnes, T.S.; Williams, G.M.; Magalhães, R.J.S.; McManus, D.P.; Guo, D.; Clements, A.C. Land cover change during a period of extensive landscape restoration in Ningxia Hui Autonomous Region, China. Sci. Total Environ. 2017, 598, 669–679. [Google Scholar] [CrossRef] [PubMed]
  12. Alves, D.B.; Pérez-Cabello, F. Multiple remote sensing data sources to assess spatio-temporal patterns of fire incidence over Campos Amazônicos Savanna Vegetation Enclave (Brazilian Amazon). Sci. Total Environ. 2017, 601, 142–158. [Google Scholar] [CrossRef] [PubMed]
  13. Nichol, J.E.; Abbas, S. Integration of remote sensing datasets for local scale assessment and prediction of drought. Sci. Total Environ. 2015, 505, 503–507. [Google Scholar] [CrossRef] [PubMed]
  14. Zhong, L.; Gong, P.; Biging, G.S. Efficient corn and soybean mapping with temporal extendability: A multi-year experiment using Landsat imagery. Remote Sens. Environ. 2014, 140, 1–13. [Google Scholar] [CrossRef]
  15. Atzberger, C.; Rembold, F. Mapping the Spatial Distribution of Winter Crops at Sub-Pixel Level Using AVHRR NDVI Time Series and Neural Nets. Remote Sens. 2013, 5, 1335–1354. [Google Scholar] [CrossRef] [Green Version]
  16. Sun, R.; Rong, Y.; SU, H.; Chen, S. NDVI time-series reconstruction based on MODIS and HJ-1 CCD data spatial–temporal fusion. J. Remote Sens. 2016, 20, 361–373. (In Chinese) [Google Scholar]
  17. Witharana, C.; Civco, D.L.; Meyer, T.H. Evaluation of data fusion and image segmentation in earth observation based rapid mapping workflows. ISPRS J. Photogramm. Remote Sens. 2014, 87, 1–18. [Google Scholar] [CrossRef]
  18. Song, Q.; Zhou, Q.; Wu, W.; Hu, Q.; Yu, Q.; Tang, H. Recent Progresses in Research of Integrating Multi-Source Remote Sensing Data for Crop Mapping. Sci. Agric. Sin. 2015, 48, 1122–1135. (In Chinese) [Google Scholar]
  19. Le Maire, G.; Dupuy, S.; Nouvellon, Y.; Loos, R.A.; Hakarnada, R. Mapping short-rotation plantations at regional scale using MODIS time series: Case of eucalypt plantations in Brazil. Remote Sens. Environ. 2014, 152, 136–149. [Google Scholar] [CrossRef]
  20. Qiao, H.-B.; Cheng, D.-F.; Soc, I.C. Application of EOS/MODIS-NDVI at Different Time Sequences on Monitoring Winter Wheat Acreage in Henan Province. In Proceedings of the 2009 International Conference on Environmental Science and Information Application Technology, Wuhan, China, 4–5 July 2009; pp. 113–115. [Google Scholar]
  21. Leckie, D.G. Advances in remote sensing technologies for forest surveys and management. Can. J. For. Res. 1990, 20, 464–483. [Google Scholar] [CrossRef]
  22. Holben, B.N. Characteristics of Maximum-Value Composite Images from Temporal Avhrr Data. Int. J. Remote Sens. 1986, 7, 1417–1434. [Google Scholar] [CrossRef]
  23. Justice, C.O.; Townshend, J.R.G.; Holben, B.N.; Tucker, C.J. Analysis of the Phenology of Global Vegetation Using Meteorological Satellite Data. Int. J. Remote Sens. 1985, 6, 1271–1318. [Google Scholar] [CrossRef]
  24. Turker, M.; Ozdarici, A. Field-based crop classification using SPOT4, SPOT5, IKONOS and QuickBird imagery for agricultural areas: A comparison study. Int. J. Remote Sens. 2011, 32, 9735–9768. [Google Scholar] [CrossRef]
  25. Mathur, A.; Foody, G.M. Crop classification by support vector machine with intelligently selected training data for an operational application. Int. J. Remote Sens. 2008, 29, 2227–2240. [Google Scholar] [CrossRef] [Green Version]
  26. Yang, C.; Everitt, J.H.; Murden, D. Evaluating high resolution SPOT 5 satellite imagery for crop identification. Comput. Electron. Agric. 2011, 75, 347–354. [Google Scholar] [CrossRef]
  27. Zhang, J.; Shen, K.; Pan, Y.; Li, L.; Hou, D. HJ-1 Remotely Sensed Data and Sampling Method for Wheat Area Estimation. Sci. Agric. Sin. 2010, 43, 3306–3315. (In Chinese) [Google Scholar]
  28. Zheng, C.; Wang, X.; Huang, J. Decision Tree Algorithm of Automatically Extracting Paddy Rice Information from SPOT-5 Images Based on Characteristic Bands. Remote Sens. Technol. Appl. 2008, 23, 294–299. (In Chinese) [Google Scholar]
  29. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  30. Jia, K.; Liang, S.L.; Zhang, L.; Wei, X.Q.; Yao, Y.J.; Xie, X.H. Forest cover classification using Landsat ETM plus data and time series MODIS NDVI data. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 32–38. [Google Scholar] [CrossRef]
  31. Jia, K.; Liang, S.; Zhang, N.; Wei, X.; Gu, X.; Zhao, X.; Yao, Y.; Xie, X. Land cover classification of finer resolution remote sensing data integrating temporal features from time series coarser resolution data. ISPRS J. Photogramm. Remote Sens. 2014, 93, 49–55. [Google Scholar] [CrossRef]
  32. Jia, K.; Liang, S.L.; Wei, X.Q.; Yao, Y.J.; Su, Y.R.; Jiang, B.; Wang, X.X. Land Cover Classification of Landsat Data with Phenological Features Extracted from Time Series MODIS NDVI Data. Remote Sens. 2014, 6, 11518–11532. [Google Scholar] [CrossRef] [Green Version]
  33. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  34. Pervez, M.S.; Budde, M.; Rowland, J. Mapping irrigated areas in Afghanistan over the past decade using MODIS NDVI. Remote Sens. Environ. 2014, 149, 155–165. [Google Scholar] [CrossRef] [Green Version]
  35. Zheng, B.; Myint, S.W.; Thenkabail, P.S.; Aggarwal, R.M. A support vector machine to identify irrigated crop types using time-series Landsat NDVI data. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 103–112. [Google Scholar] [CrossRef]
  36. Jin, N.; Tao, B.; Ren, W.; Feng, M.; Sun, R.; He, L.; Zhuang, W.; Yu, Q. Mapping Irrigated and Rainfed Wheat Areas Using Multi-Temporal Satellite Data. Remote Sens. 2016, 8, 207. [Google Scholar] [CrossRef]
  37. Olexa, E.M.; Lawrence, R.L. Performance and effects of land cover type on synthetic surface reflectance data and NDVI estimates for assessment and monitoring of semi-arid rangeland. Int. J. Appl. Earth Obs. Geoinf. 2014, 30, 30–41. [Google Scholar] [CrossRef]
  38. Bisquert, M.; Bordogna, G.; Begue, A.; Candiani, G.; Teisseire, M.; Poncelet, P. A Simple Fusion Method for Image Time Series Based on the Estimation of Image Temporal Validity. Remote Sens. 2015, 7, 704–724. [Google Scholar] [CrossRef] [Green Version]
  39. Los, S.O.; Justice, C.O.; Tucker, C.J. A global 1° by 1° NDVI data set for climate studies derived from the GIMMS continental NDVI data. Int. J. Remote Sens. 1994, 15, 3493–3518. [Google Scholar] [CrossRef]
  40. Cao, Y.; Wang, Z.; Den, F. Fidelity Performance of Three Filters for High Quality NDVI Time-series Analysis. Remote Sens. Technol. Appl. 2010, 25, 118–125. (In Chinese) [Google Scholar]
  41. Wang, Q.; Yu, X.; Shu, Q.; Shang, K.; Wen, K. Comparison on Three Algorithms of Reconstructing Time-series MODIS EVI. J. Geo-Inf. Sci. 2015, 17, 732–741. (In Chinese) [Google Scholar]
  42. Li, T.; Zhu, X.; Pan, Y.; Liu, X. NDVI Time-series Reconstruction Methods of China’s HJ Satellite Imagery. Remote Sens. Inf. 2015, 30, 58–65. (In Chinese) [Google Scholar]
  43. Song, C.; You, S.; Ke, L.; Liu, G. Analysis on Three NDVI Time-series Reconstruction Methods and Their Applications in North Tibet. J. Geo-Inf. Sci. 2011, 13, 133–143. (In Chinese) [Google Scholar] [CrossRef]
  44. Brown, J.C.; Kastens, J.H.; Coutinho, A.C.; Victoria, D.d.C.; Bishop, C.R. Classifying multiyear agricultural land use data from Mato Grosso using time-series MODIS vegetation index data. Remote Sens. Environ. 2013, 130, 39–50. [Google Scholar] [CrossRef] [Green Version]
  45. Xiao, X.M.; Boles, S.; Liu, J.Y.; Zhuang, D.F.; Liu, M.L. Characterization of forest types in Northeastern China, using multi-temporal SPOT-4 VEGETATION sensor data. Remote Sens. Environ. 2002, 82, 335–348. [Google Scholar] [CrossRef]
  46. Vaiphasa, C.; Skidmore, A.K.; de Boer, W.F.; Vaiphasa, T. A hyperspectral band selector for plant species discrimination. ISPRS J. Photogramm. Remote Sens. 2007, 62, 225–235. [Google Scholar] [CrossRef]
  47. Vapnik, V.N. Statistical Learning Theory; Wiley: New York, NY, USA, 1998; Volume 1. [Google Scholar]
  48. Liu, Y.; Zhang, B.; Huang, L.; Wang, L. A novel optimization parameters of support vector machines model for the land use/cover classification. J. Food Agric. Environ. 2012, 10, 1098–1104. [Google Scholar]
  49. Knorn, J.; Rabe, A.; Radeloff, V.C.; Kuemmerle, T.; Kozak, J.; Hostert, P. Land cover mapping of large areas using chain classification of neighboring Landsat satellite images. Remote Sens. Environ. 2009, 113, 957–964. [Google Scholar] [CrossRef]
  50. Heikkinen, V.; Tokola, T.; Parkkinen, J.; Korpela, I.; Jaaskelainen, T. Simulated Multispectral Imagery for Tree Species Classification Using Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1355–1364. [Google Scholar] [CrossRef]
  51. Lardeux, C.; Frison, P.-L.; Tison, C.; Souyris, J.-C.; Stoll, B.; Fruneau, B.; Rudant, J.-P. Support Vector Machine for Multifrequency SAR Polarimetric Data Classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4143–4152. [Google Scholar] [CrossRef]
  52. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  53. Li, W.A.; Zhou, X.; Zhu, X.; Dong, Z.; Guo, W. Estimation of biomass in wheat using random forest regression algorithm and remote sensing data. Crop J. 2016, 4, 212–219. [Google Scholar] [Green Version]
  54. Ghasemian, N.; Akhoondzadeh, M. Introducing two Random Forest based methods for cloud detection in remote sensing images. Adv. Space Res. 2018. S0273117718303624. [Google Scholar] [CrossRef]
  55. Harrison, D.; Rivard, B.; Sánchezazofeifa, A. Classification of tree species based on longwave hyperspectral data from leaves, a case study for a tropical dry forest. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 93–105. [Google Scholar] [CrossRef]
  56. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  57. van der Linden, S.; Rabe, A.; Held, M.; Jakimow, B.; Leitão, P.; Okujeni, A.; Schwieder, M.; Suess, S.; Hostert, P. The EnMAP-Box—A Toolbox and Application Programming Interface for EnMAP Data Processing. Remote Sens. 2015, 7, 11249–11266. [Google Scholar] [CrossRef]
  58. Walker, J.J.; de Beurs, K.M.; Wynne, R.H.; Gao, F. Evaluation of Landsat and MODIS data fusion products for analysis of dryland forest phenology. Remote Sens. Environ. 2012, 117, 381–393. [Google Scholar] [CrossRef]
  59. Zhu, X.L.; Chen, J.; Gao, F.; Chen, X.H.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  60. Foody, G.M. Classification accuracy comparison: Hypothesis tests and the use of confidence intervals in evaluations of difference, equivalence and non-inferiority. Remote Sens. Environ. 2009, 113, 1658–1663. [Google Scholar] [CrossRef] [Green Version]
  61. Allouche, O.; Tsoar, A.; Kadmon, R. Assessing the accuracy of species distribution models: Prevalence, kappa and the true skill statistic (TSS). J. Appl. Ecol. 2006, 43, 1223–1232. [Google Scholar] [CrossRef]
  62. Zhao, Q.; Jiang, L.; Li, W.; Feng, Z. Spatial-temporal pattern change of winter wheat area in northwest Shandong Province during 2000–2014. Remote Sens. Land Resour. 2017, 29, 173–180. [Google Scholar]
  63. Gu, Y.; Brown, J.F.; Miura, T.; Van Leeuwen, W.J.; Reed, B.C. Phenological classification of the United States: A geographic framework for extending multi-sensor time-series data. Remote Sens. 2010, 2, 526–544. [Google Scholar] [CrossRef]
Figure 1. The location of Shandong Province in China (A); the location of the study area in Shandong Province (B); the study area is shown in one true color (R: red, G: green, B: blue) image using ZY-3 data of 2 December 2014 (C).
Figure 2. The flowchart of crop classification based on the spatial and temporal adaptive reflectance fusion model (STARFM).
Figure 3. The reference normalized difference vegetation index (NDVI) time series of typical objects in the study area.
Figure 4. Comparison of observed (a) and predicted NDVIs (b), and the distribution of difference NDVI (c).
Figure 5. Scatter plot of observed and predicted NDVIs.
Figure 6. Comparison of crop classification accuracy between NDVI-TSCM and NDVI-TFCM over the first small region; the classification results of NDVIS-TFCM25 (A), NDVIS-TSCM25 (B), NDVIR-TFCM25 (C), NDVIR-TSCM25 (D), NDVIS-TFCM14 (E), NDVIS-TSCM14 (F), NDVIR-TFCM14 (G), NDVIR-TSCM14 (H), NDVIS-TFCM4 (I), NDVIS-TSCM4 (J), NDVIR-TFCM4 (K), and NDVIR-TSCM4 (L); a standard false color composite image using the ZY-3 data of 2 December 2014 (M); the classification result of the whole study area based on NDVIR-TSCM25 (N).
Figure 7. Comparison of crop classification accuracy between NDVI-TSCM and NDVI-TFCM over the second small region; the classification results of NDVIS-TFCM25 (A), NDVIS-TSCM25 (B), NDVIR-TFCM25 (C), NDVIR-TSCM25 (D), NDVIS-TFCM14 (E), NDVIS-TSCM14 (F), NDVIR-TFCM14 (G), NDVIR-TSCM14 (H), NDVIS-TFCM4 (I), NDVIS-TSCM4 (J), NDVIR-TFCM4 (K), and NDVIR-TSCM4 (L); a standard false color composite image using the ZY-3 data of 2 December 2014 (M); a small area that was obtained from Google Earth on 8 February 2015 (N).
Table 1. List of the data used in this study.
Sensor | Year | Dates (month/day)
ZY-3 | 2014 | 12/2
ZY-3 | 2015 | 2/4, 4/14, 6/12
Landsat 8 | 2014 | 12/2, 12/18, 12/25
Landsat 8 | 2015 | 1/3, 1/10, 1/19, 2/4, 3/15, 3/24, 4/25, 5/18, 6/3
HJ-1A/B CCD | 2015 | 3/1, 3/7, 3/11, 3/27, 4/15, 4/20, 4/30, 5/4, 5/12, 5/20, 6/6, 6/8, 6/12
Table 2. Samples of the different land covers.
Sample Type | Land Cover Class | Number of Samples
Non-Wheat | Urban and Artificial Buildings | 327
Non-Wheat | Villages | 237
Non-Wheat | Road | 133
Non-Wheat | Bare Land | 120
Non-Wheat | Other Crops | 98
Non-Wheat | Forests | 231
Non-Wheat | Water | 187
Non-Wheat | Total | 1333
Wheat | Wheat | 882
Table 3. Quantitative evaluation results of the difference images.
Evaluation Index | Predicted NDVI (2 December 2014) | Predicted NDVI (4 February 2015)
AAD | 0.030 | 0.029
AD | 0.018 | −0.018
SD | 0.036 | 0.034
p < 0.1 | 97.57% | 98.17%
p < 0.2 | 99.93% | 99.96%
* p: percentage of pixel difference absolute values.
Table 4. Confusion matrix for the classification results using NDVI-TSCM25 and NDVI-TFCM25 under the SVM and RF classifiers (%).
Classification Model | NDVIS-TSCM25 | NDVIR-TSCM25 | NDVIS-TFCM25 | NDVIR-TFCM25
Wheat, Pro. Acc | 95.92 | 98.60 | 96.80 | 96.91
Wheat, User Acc | 98.80 | 99.43 | 98.81 | 99.42
Non-wheat, Pro. Acc | 99.07 | 99.52 | 99.07 | 99.52
Non-wheat, User Acc | 96.83 | 98.82 | 97.49 | 97.43
Overall accuracy | 97.68 | 99.10 | 98.06 | 98.32
Kappa coefficient | 0.953 | 0.982 | 0.961 | 0.966
TSS coefficient | 0.950 | 0.982 | 0.959 | 0.964
Table 5. Confusion matrix for the classification results using NDVI-TSCM14 and NDVI-TFCM14 under the support vector machine (SVM) and random forest (RF) classifiers (%).
Classification Model | NDVIS-TSCM14 | NDVIR-TSCM14 | NDVIS-TFCM14 | NDVIR-TFCM14
Wheat, Pro. Acc | 92.71 | 98.03 | 94.75 | 97.19
Wheat, User Acc | 99.06 | 99.43 | 98.48 | 98.86
Non-wheat, Pro. Acc | 99.31 | 99.52 | 98.84 | 99.05
Non-wheat, User Acc | 94.49 | 98.35 | 95.96 | 97.65
Overall accuracy | 96.39 | 98.84 | 97.03 | 98.19
Kappa coefficient | 0.926 | 0.976 | 0.940 | 0.964
TSS coefficient | 0.920 | 0.975 | 0.936 | 0.962
Table 6. Confusion matrix for the classification results using NDVI-TSCM4 and NDVI-TFCM4 under the SVM and RF classifiers (%).
Classification Model | NDVIS-TSCM4 | NDVIR-TSCM4 | NDVIS-TFCM4 | NDVIR-TFCM4
Wheat, Pro. Acc | 93.29 | 97.47 | 92.42 | 96.07
Wheat, User Acc | 97.26 | 98.86 | 98.45 | 99.71
Non-wheat, Pro. Acc | 97.92 | 99.04 | 98.84 | 99.76
Non-wheat, User Acc | 94.84 | 97.88 | 94.26 | 96.76
Overall accuracy | 95.87 | 98.32 | 96.00 | 98.06
Kappa coefficient | 0.916 | 0.966 | 0.918 | 0.961
TSS coefficient | 0.912 | 0.965 | 0.912 | 0.958
* Note: Pro. Acc: producer's accuracy; User Acc: user's accuracy.
Table 7. Quantitative evaluation of wheat statistical area.
Classification Model | NDVI-TSCM25 | NDVI-TFCM25 | NDVI-TSCM14 | NDVI-TFCM14 | NDVI-TSCM4 | NDVI-TFCM4
Statistical area (ha), SVM | 46,589.54 | 46,652.93 | 46,496.90 | 46,555.41 | 51,153.41 | 46,394.50
Statistical area (ha), RF | 48,849.84 | 46,648.01 | 48,532.69 | 46,518.31 | 47,594.78 | 42,258.97
Published data (ha) | 48,759.33 (same reference value for all models)
Relative error (%), SVM | 4.45 | 4.32 | 4.64 | 4.52 | 4.91 | 4.85
Relative error (%), RF | 0.19 | 4.33 | 0.46 | 4.60 | 2.39 | 13.33
Table 8. Determination of the date range of important images.
Variable importance of each image date by number of phases:
Image Date | 25 Phases | 23 Phases | 21 Phases | 14 Phases | 8 Phases | 4 Phases
2 Dec 2014 | 3.17 | 3.11 | 6.8 | 3.54 | 6.27 | 15.53
18 Dec 2014 | 8.72 | - | - | 7.53 | 18.46 | -
25 Dec 2014 | 6.74 | 9.17 | - | 6.49 | - | -
3 Jan 2015 | 2.57 | 2.19 | 4.46 | 1.61 | 4.05 | -
10 Jan 2015 | 0.95 | 2.66 | 2.14 | 0.91 | 1.89 | -
19 Jan 2015 | 0.4 | 1.21 | 1.5 | 0.39 | - | -
4 Feb 2015 | 0.52 | 0.25 | 1.89 | 1.08 | 1.21 | 6.25
1 Mar 2015 | 0.77 | 1.72 | 1.94 | - | - | -
7 Mar 2015 | 0.52 | 0.28 | 0.23 | - | - | -
11 Mar 2015 | 0.29 | 1.57 | 1.89 | - | - | -
15 Mar 2015 | 1.74 | 1.83 | 2.76 | 1.34 | - | -
24 Mar 2015 | 8.77 | - | - | 13.22 | - | -
27 Mar 2015 | 5.71 | 7 | - | - | - | -
14 Apr 2015 | 3.68 | 5.08 | 8.94 | 4.19 | 13.68 | 24.57
20 Apr 2015 | 1.66 | 3.59 | 3.83 | 2.54 | - | -
25 Apr 2015 | 2.04 | 2.6 | 3.76 | - | - | -
30 Apr 2015 | 1.66 | 2.28 | 2.34 | 2.22 | - | -
4 May 2015 | 1.01 | 1.21 | 2.78 | - | - | -
12 May 2015 | 0.73 | 0.7 | 0.57 | - | - | -
18 May 2015 | 1.2 | 0.63 | 0.91 | - | - | -
20 May 2015 | 0.45 | 0.17 | 0.86 | 0.71 | 1.99 | -
3 Jun 2015 | 0.08 | 0.38 | 0.9 | - | - | -
6 Jun 2015 | 0.07 | 0.13 | 0.17 | - | - | -
8 Jun 2015 | 0.08 | 0.24 | 0.31 | - | - | -
12 Jun 2015 | 0.24 | 0.62 | 0.4 | 0.29 | 1.52 | 1.8
Overall accuracy | 99.10 | 98.97 | 98.84 | 98.84 | 98.41 | 98.32
Kappa | 0.982 | 0.980 | 0.977 | 0.969 | 0.970 | 0.966

