Review

Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review

General Coordination of Earth Observation (OBT), Remote Sensing Division (DSR), National Institute for Space Research (INPE), Av. dos Astronautas 1758, São José dos Campos, Brazil
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3062; https://doi.org/10.3390/rs12183062
Submission received: 30 July 2020 / Revised: 12 September 2020 / Accepted: 15 September 2020 / Published: 18 September 2020

Abstract
Recent applications of Landsat 8 Operational Land Imager (L8/OLI) and Sentinel-2 MultiSpectral Instrument (S2/MSI) data for acquiring information about land use and land cover (LULC) provide a new perspective in remote sensing data analysis. Jointly, these sources allow researchers to improve operational classification and change detection, supporting better reasoning about landscapes and intrinsic processes such as deforestation and agricultural expansion. However, the results of their applications have not yet been synthesized to provide coherent guidance on their effect in different classification processes, or to identify promising approaches and the issues that affect classification performance. In this systematic review, we present trends, potentialities, challenges, current gaps, and future possibilities for the use of L8/OLI and S2/MSI for LULC mapping and change detection. In particular, we highlight the possibility of using medium-resolution (Landsat-like, 10–30 m) time series and multispectral optical data provided by the harmonization of these sensors and by data cube architectures for analysis-ready data, supported by open data policies and open science principles. We also reinforce the potential of exploring more spectral band combinations, especially the three Red-edge and the two Near-Infrared and Shortwave-Infrared bands of S2/MSI, to calculate vegetation indices more sensitive to phenological variations; these bands were long underused but have gained traction since the S2/MSI mission. Summarizing peer-reviewed papers can guide the scientific community in the use of L8/OLI and S2/MSI data, enabling detailed knowledge on LULC mapping and change detection in different landscapes, especially in agricultural and natural vegetation scenarios.

1. Introduction

Land Use and Land Cover (LULC) are classical concepts and crucial information to understand the relationship between humans and the environment [1]. Land Cover refers to the physical characteristics of the Earth’s surface, such as vegetation, water, and soil, while Land Use refers to the purposes for which humans exploit the Land Cover, such as changes made by anthropogenic activities [2]. LULC changes (LULCC) refer to the dynamics of this interaction and can be driven by human actions (e.g., deforestation, urbanization, and agriculture intensification) or natural processes (e.g., droughts, floods, and natural fires) [2]. Therefore, LULC data can support our perception of coupled human–environment systems [3]. Nowadays, the importance of accurate LULC data has grown to support the implementation of policies related to the management of natural resources and environmental problems, such as food security, climate change, deforestation, and agriculture dynamics [4,5]. This reinforces the need for detailed mappings to enable sustainable development [6,7,8].
Remote sensing satellite data are a valuable source for producing up-to-date LULC classifications [3], and big Earth Observation (EO) data sets are routinely applied to derive information about LULC and LULCC [9]. The use of Satellite Image Time Series (SITS), for example, enables better reasoning about the landscape and intrinsic processes such as deforestation, greenhouse gas emission, and agricultural expansion [10]. However, LULC classification is a complex process influenced by many factors [11]. Over the last two decades, the main efforts in terms of algorithms, products, and applications have been especially related to moderate spatial resolution instruments, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor (250 × 250 m), because data at this spatial resolution have been more available and easier to process [12]. Phenology metrics, for example, which are relevant features for understanding phenological cycles and detecting subtle dissimilarities among similar classes, have been routinely generated from moderate resolution data [13].
The efforts using MODIS have achieved relevant results [14,15,16]. However, for many applications, information is required at finer spatial resolutions [12]. As noted by Chen et al. [17], medium resolution (Landsat-like, 10–30 m) is more adequate to detect most human–nature interactions, and low to moderate resolution sensors are not appropriate for high-precision LULC classification research at regional scales [18]. For detailed information on plant phenology, it is important to derive these data from medium-resolution sensors, such as the Landsat 8 Operational Land Imager (L8/OLI) and the Sentinel-2 MultiSpectral Instrument (S2/MSI) [4]. This has expanded the handling and analysis of big Earth spatio-temporal datasets of multi-source data at Landsat-like resolution [19], and the increase in spatial resolution, free access policies, and systematic global coverage have created a new era for LULC and LULCC applications [20,21]. Landsat and Sentinel represent the two flagships for medium-resolution land imaging [22]. Since L8/OLI and S2/MSI data share many technical characteristics [23], they represent cost-effective tools to describe landscape dynamics at broad scales. The literature shows the importance of L8/OLI and S2/MSI data for producing timely and accurate maps [24,25]. However, the results have not yet been synthesized to provide coherent guidance on their effect in LULC classification processes, or to identify limitations and promising approaches. To do so, it is necessary to synthesize the collective knowledge on this topic, as opposed to relying on individual experience and expertise [26]. Recently, systematic reviews of remote sensing applications have provided reliable synthesis and scientific guidance [19,27,28]. However, none focused on L8/OLI and S2/MSI data for LULC classification and LULCC detection.
Therefore, the main objective of this work is to conduct a systematic review of peer-reviewed papers published in Journal volumes from 2015, the year of the S2/MSI launch, to 2020 related to the use of L8/OLI and S2/MSI spectral bands (bands) and spectral vegetation indices (VIs) to map LULC and detect and monitor LULC changes. Our goal is specifically to: (1) synthesize the current state of knowledge, aiming to identify relevant developments and applications of classification methods concerning sensors, LULC types, classifiers, geographical regions, training sample sizes, segmentation algorithms, and other variables; and (2) summarize scientific advances for LULC classification and LULCC detection, and existing issues and limitations, providing scientific guidelines and new insights for future research regarding their use.

2. Landsat 8/OLI and Sentinel-2/MSI Characteristics

The worldwide joint use of L8/OLI and S2/MSI data as input for LULC and LULCC applications occurs because they are: freely available [21], interoperable [29], and able to monitor large surfaces [30]. These sources are placed in sun-synchronous orbits [22] and make similar measurements in terms of spectral, spatial, and angular characteristics (Table 1).
L8, launched by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), has two sensors: the Operational Land Imager (OLI) (30 m spatial resolution and nine bands) and the Thermal InfraRed Sensor (TIRS) (100 m spatial resolution and two bands) [22]. The temporal resolution is 16 days and the radiometric resolution is 16 bits. The area covered by each scene is 185 × 180 km [30]. S2 is a mission with two satellites (S2A and S2B, launched by the European Union’s Copernicus Earth Observation program of the European Space Agency (ESA) in 2015 and 2017, respectively) [22]. Both S2 satellites carry the MultiSpectral Instrument (MSI), a sensor with 13 bands and spatial resolution varying between 10 and 60 m from the visible to the shortwave infrared (SWIR) region [33]. The S2/MSI mission provides data with a 5-day revisit frequency, 16 bits of radiometric resolution, and a swath width of 290 km [30,31]. Because of its spectral capabilities (including three bands in the Red-edge and two bands in the SWIR), the S2/MSI mission provides new mapping possibilities [33,34,35]. These bands can be considered more suited to vegetation analysis than L8/OLI bands because this wavelength range is sensitive to chlorophyll content and highly variable among different crops and phenological states [36,37]; they can be used to derive band ratios and to calculate a wide variety of indices useful for vegetation discrimination and LULC classification [38,39,40]. For LULC and LULCC analysis, these benefits can provide more accurate results than Landsat data, which have presented shortcomings related to their spatial and spectral resolutions and to cloud cover interference due to the 16-day revisit interval [36].
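Because the two sensors are interoperable, joint analyses typically rely on a fixed pairing of near-equivalent bands. The sketch below encodes the pairings commonly used when combining the two sensors; the band numbers and nominal resolutions follow the official sensor designations, while the dictionary names and structure are illustrative.

```python
# Commonly used band pairings between Landsat 8/OLI and Sentinel-2/MSI.
# Keys are OLI band numbers; values give the roughly equivalent MSI band,
# a common name, and the MSI spatial resolution in meters. Exact spectral
# response functions differ slightly (hence the bandpass adjustments used
# in harmonized products).
OLI_TO_MSI = {
    "B1": ("B1", "Coastal aerosol", 60),
    "B2": ("B2", "Blue", 10),
    "B3": ("B3", "Green", 10),
    "B4": ("B4", "Red", 10),
    "B5": ("B8A", "NIR narrow", 20),
    "B6": ("B11", "SWIR 1", 20),
    "B7": ("B12", "SWIR 2", 20),
}

# MSI's three Red-edge bands (B5, B6, B7, all at 20 m) have no OLI
# counterpart, which is one source of S2/MSI's extra mapping capability.
MSI_RED_EDGE = ("B5", "B6", "B7")
```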
The integration of L8/OLI and S2/MSI data provides a global average revisit interval of about 3 days [12], which allows surface monitoring with cloud-free observations in some regions [41] and the development of agricultural products at medium spatial resolution [28,42]. With Landsat 9, the virtual constellation will become even more frequent, approaching a 2-day revisit cycle [43]. This enables the development of methods considering different temporal and spatial resolutions to derive biophysical vegetation parameters and the spectral unmixing of sub-pixel fractions, which permits mapping dynamic processes at sub-hectare resolution [29].

3. Background

Systematic reviews have recently been conducted concerning techniques for LULC and LULCC analysis. See et al. [44] discussed the need for spatially explicit cropland datasets at a global scale and reviewed the strengths and weaknesses of the various approaches used to develop such data. Belgiu and Drăgut [45] discussed the most used techniques for LULC classification at regional and global levels. Gómez et al. [1] presented issues and opportunities associated with the use of SITS of large-area products for LULC characterization. Khatami et al. [11] provided guidance on the performance of different supervised pixel-based image classification processes, involving algorithms and input data. Maxwell et al. [46] analyzed machine learning implementation in LULC classification models, focusing on algorithms, training data requirements, user-defined parameters, and computational costs. Wulder et al. [47] revisited the status of LULC classification with EO data to explain the new era of LULC analysis (Land Cover 2.0) enabled by free and open access data, analysis-ready data (ARD), and high-performance computing. Pandey et al. [48] analyzed the different EO platforms and datasets, the spatial-spectral-temporal characteristics of satellite data, and the approaches applied in LULC classification and change detection, providing recommendations for generating accurate maps. Sudmanns et al. [49] discussed the uptake of big EO data, describing the challenges of generating consistent and systematic global information and pointing out the importance of web-based workflows. Whitcraft et al. [8] presented current and potential uses of EO data to support sustainable and social policies in the agricultural and food security contexts, highlighting activities of the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) Initiative. Weiss et al. [50] presented an overview of recent techniques for providing operational services for agricultural applications. Yu et al. [51] presented guidance on the use of big EO data for accurate spatiotemporal event detection. Zeng et al. [28] summarized emerging methods based on SITS analysis for phenology detection, detailing applications to detect and estimate general and species-specific phenological stages. Phiri et al. [52] evaluated the inclusion of S2/MSI data for LULC monitoring with traditional classification methods. Despite these reviews, the knowledge of using L8/OLI and S2/MSI, showing trends, gaps, and future possibilities in the face of modern concepts, methods, and technologies, has not yet been summarized.

4. Material and Methods

We reviewed papers published in specialized journals from 2015 to 2020, examining algorithms, classes, samples, overall accuracies (OA), and other variables. Classical references for the underlying concepts and the original references for the VIs were also incorporated. Our information sources were the following databases: AGORA (FAO); Current Contents–Physical Chemical & Earth Sciences (Clarivate Analytics); DBLP Computer Science Bibliography (Universität Trier); DOAJ—Directory of Open Access Journals; Ei Compendex/Engineering Village (Elsevier); Genamics JournalSeek; HINARI (WHO); Inspec (IET); Journal Citation Reports/Science Edition (Clarivate Analytics); Science Citation Index Expanded—Web of Science (Clarivate Analytics); Scopus (Elsevier), and Web of Science (Clarivate Analytics).
To define the protocol and registration for integrating the papers, linking sources, eligibility criteria, selection, and data collection, we adopted the “Preferred Reporting Items for Systematic reviews and Meta-analyses” (PRISMA) [53] and the “Protocol, Search, Appraisal, Synthesis, Analysis, Research Protocol, and Reporting results” (PSALSAR) [54] methods. The search strategy involved the determination of hierarchical topics to guide the selection of papers. We organized them in the Mendeley platform (https://www.mendeley.com/) considering the following topics: “Land use and land cover mapping”, “Sentinel-2 and Landsat 8 time series”, “Pixel- and Object-based image analysis with Sentinel-2 and Landsat 8”, “Landsat 8 and Sentinel-2 integration”, “Red-edge and SWIR-associated vegetation indices”, “Random Forest (RF)”, and “Support Vector Machine (SVM)”. We designed a schematic process based on Ma et al. [26] to filter, select, and organize the papers (Table 2).
We selected the papers according to their scope and objectives, and characteristics such as the geographic area mapped, sampling strategy, number of samples and classes, classification method, and accuracy. We especially selected papers in the agricultural or natural vegetation domain because of their dynamics and heterogeneity, favoring innovative and reproducible methods that permit reasoning about landscape dynamics and considering the capability to distinguish different classes. We initially selected 989 papers and included 193 of them in this systematic review (Figure 1).
Based on Bajocco et al. [55], we performed a text mining analysis through term network extraction using the VOSviewer software on the 193 recent papers to analyze the evolution of the research fields in the evaluated period. Including classical references, we cited 221 papers and two websites.

5. Results and Discussion

The 193 recent papers from 2015 to 2020 were developed by research groups from 35 countries (Figure 2), according to the affiliation of the principal author. Countries with over 10 publications were, in decreasing order, the USA (43), China (24), Germany (20), France (13), and Brazil (12).
The main sources of the 221 peer-reviewed publications were the journals Remote Sensing of Environment (58 papers), Remote Sensing (48 papers), and ISPRS Journal of Photogrammetry and Remote Sensing (16 papers). The top 15 journals are presented in Figure 3.
We noted the prevalence of SITS, multi-temporal analysis, and two principal classification approaches: (a) pixel-based, using SITS to monitor pixel trajectories over time, and (b) object-based, through geographic object-based image analysis (GEOBIA), which evaluates pixel neighborhood relations for segmentation and classification; both are often used in conjunction with robust machine learning algorithms. We also observed the growing use of phenological metrics and deep learning methods. The text mining analysis highlighted the occurrence and link strength of each term (Figure 4).
The network reveals a high connection among relevant and different features, techniques, and methods used as strategies to address LULC-related purposes. The strength of the connections reinforces the association of ’land use’, ’land cover’, and ’land use change’ with ’machine learning’ techniques, ’time series analysis’, and object-based (’OBIA’, ’GEOBIA’, ’segmentation’) approaches, as well as ’surface reflectance’, ’big data’, ’data fusion’ (’analysis-ready data’, ’data cube’), ’vegetation indices’, and ’data mining’ for ’image classification’. By text mining, it was also possible to detect the evolution of research term occurrence throughout the period of analysis (Figure 5).
The temporal analysis of research term occurrence in the network reveals the consolidation of target covers (’agriculture’, ’grassland’, ’deforestation’) and the increasing connection among different features, techniques, and methods for LULC classification (i.e., the link among ’vegetation indices’, ’time series analysis’, and ’cloud computing’). The connections reinforce innovative approaches, with the association of ’red-edge’ and ’land surface phenology’ with ’machine learning’ techniques, ’satellite image time series’, and object-based (’OBIA’, ’GEOBIA’, ’segmentation’) approaches, and the integrated use of ’surface reflectance’, ’analysis-ready data’, ’data cube’, ’vegetation indices’, and metrics related to ’phenology’. Considering the dynamism of the highlighted target covers and the robustness of the features, techniques, and methods applied, it is reasonable to expect that study areas in countries with heterogeneous landscapes are explored more often in these applications (Figure 6). The prevalent countries are marked by highly dynamic activities (i.e., agriculture, deforestation, and urban expansion) and a heterogeneous gradient of vegetation types, which are attractive for LULC- and LULCC-related purposes. In addition, the presence of developing countries reflects the support for national reporting capacities toward the Sustainable Development Goals, enabled by publicization and open data policies for remote sensing data.
These papers were summarized to provide knowledge on consolidated applications of L8/OLI and S2/MSI data, discussing gaps and challenges and suggesting future directions with examples.

5.1. Consolidated Trends

After the launch of L8/OLI and S2/MSI, an increasing trend in medium-resolution SITS exploitation for LULC classification was detected [56]. In theory, S2/MSI can provide three observations per month and L8/OLI two, yielding five monthly medium-resolution observations and benefiting the monitoring of abrupt changes [21]. This higher temporal resolution of global observations enabled the concept of dense SITS for surface monitoring [25], which increased the use of cloud computing platforms (i.e., Amazon Web Services (AWS) and Google Earth Engine (GEE)) to process big EO data [49,57]. Due to the flexibility, efficiency, and availability of cloud-based processing, these platforms optimized on-demand data access, bulk downloading, ingestion, and processing capabilities [58]. They also brought together the pixel- and object-based approaches for LULC mapping at different geographical scales by encompassing segmentation and classification techniques [50,59]. Critical reviews have discussed the integration between these approaches [1,26,60].

5.1.1. SITS and Pixel-Based Approaches

Dense SITS derived from big EO medium-resolution data have been used to improve LULC classifications, since they permit the detection of phenological variation over specific periods, such as growing seasons [5,61,62]. Being suitable for dealing with crop dynamics [63] and the heterogeneity of natural areas [64], SITS analysis can enhance the use of VI information as input in LULC classification methods, improving the separability of classes [5]. In addition, considering LULCC as a dynamic process without guaranteed spatial coherence over time, SITS are used to detect specific changes by analyzing pixel reflectance trajectories [9,65]. To be effective, SITS analysis must address the scarcity of training sample data, which can be overcome by using reference data from the past; the low and irregular temporal availability of satellite images caused by a high presence of clouds; and the variability of pseudo-periodic phenomena (e.g., vegetation cycles) [10]. Since more medium-resolution datasets have become available, the above-mentioned data problem can be addressed [35]. Several papers have pointed out the importance of the pixel-based approach, using L8/OLI and S2/MSI data as input in classification models to provide accurate LULC maps. Liu et al. [66] integrated L8/OLI and S2/MSI SITS using the GEE platform for mapping crop types at a 30-m spatial resolution over seven regions of China in the 2016–2017 harvest season, achieving an OA of 93%. Also integrating L8/OLI and S2/MSI on GEE, Xiong et al. [67] developed an approach for cropland mapping in Africa in 2015–2016, using object-based segmentation and selecting training samples to create pixel-based classifications; using RF and SVM algorithms, the OA was 94% for mapping croplands. Piedelobo et al. [30] explored integrated L8/OLI and S2/MSI SITS for crop classification in the Duero river basin, Spain, for the 2017 spring and summer seasons, achieving OAs of 87% and 92% when classifying 15 crop types individually and grouped, respectively.
Other approaches have emerged. Combining S2/MSI pixel- and object-based SITS as input in RF and Time-Weighted Dynamic Time Warping (TWDTW) algorithms, Belgiu and Csillik [65] classified seven crop types in study areas in Romania, Italy, and the USA. The pixel-based RF obtained an OA between 87% and 97%, and the object-based TWDTW obtained an OA between 78% and 96%. Using phenological metrics derived from dense Landsat EVI SITS in a pixel-based approach, Bendini et al. [68] classified croplands in the Cerrado biome, Brazil, between 2013 and 2017, using RF and a hierarchical classification model from land cover to 16 crop rotation classes, achieving accuracies higher than 90%. Sharma et al. [69] proposed a patch-based Convolutional Neural Network (CNN) for L8/OLI data that considers the spatial relation between a pixel and its neighborhood, trained with patch-based samples from multidimensional data. The method outperformed the pixel-based neural network, pixel-based CNN, and patch-based neural network OAs by 24.36%, 24.23%, and 11.52%, respectively, considering eight LULC classes. Using SITS for monitoring phenological stages, Nasrallah et al. [70] mapped winter wheat areas up to six weeks before harvest in Lebanon, in 2016 and 2017. The OA was 87.0% in 2017, distinguishing wheat from similar winter cereal crops. Other papers were developed using pixel-based approaches [18,71,72], underscoring their potential for monitoring heterogeneous landscapes.
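The trajectory-matching idea behind methods such as TWDTW can be illustrated with plain dynamic time warping. The sketch below omits the time-weight penalty of the full TWDTW formulation and compares two synthetic NDVI trajectories against a hypothetical reference crop pattern.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series.
    (TWDTW adds a time-weight penalty term; omitted here for brevity.)"""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical NDVI trajectories: a reference crop pattern and two pixels,
# one following the crop cycle (slightly shifted) and one nearly flat
# (e.g., a stable pasture cover).
t = np.linspace(0, np.pi, 12)
crop_pattern = np.sin(t)              # green-up then senescence
pixel_crop = np.roll(np.sin(t), 1)    # same cycle, shifted by one step
pixel_flat = np.full(12, 0.3)         # stable cover, no crop cycle

d_crop = dtw_distance(crop_pattern, pixel_crop)
d_flat = dtw_distance(crop_pattern, pixel_flat)
# The shifted crop pixel matches the reference pattern far better than
# the flat one, so a nearest-pattern rule would label it as crop.
```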

5.1.2. SITS and Geographical Object-Based Approaches

Geographic object-based image analysis (GEOBIA) expands the analysis beyond pixel reflectance information by segmenting features; dividing images into geo-objects selected by shape, compactness, and texture; collecting samples; and performing classifications using geo-objects as the basic unit of analysis [73]. GEOBIA minimizes within-class spectral variability by assigning all pixels in an object to an identical LULC class, exploring the spatial information implicit in orbital images, and integrating contextual and semantic relationships among geo-objects [60]. With the rise of Landsat-like applications, the object-based approach has performed better than the pixel-based one in urban [74], agricultural [26,65], and natural areas [75], mitigating granular effects and improving change detection [61,76]. However, this approach remains rarely employed to analyze SITS data [75]. In the agricultural context, for example, despite its advantages for delineating crop fields, the implementation of object-based frameworks for the spatio-temporal analysis of medium-resolution imagery is limited [77]. The major difficulty is the alignment of objects over time due to the dynamics of agricultural landscapes, so object-based methods do not necessarily improve crop classification [76].
Nevertheless, some alternatives were developed using the object-based approach and SITS. Petitjean et al. [10] proposed enriching pixel SITS with contextual features, strengthening the connection among pixels from the same segment. Toure et al. [74] showed the suitability of geographic object-based image change analysis (GEOBICA) with L8/OLI and S2/MSI images for detecting urban changes. Khiali et al. [78] developed an object-oriented method to analyze L8/OLI SITS, identifying and grouping spatio-temporal entities that evolve similarly. They highlighted that combining GEOBIA, VIs, and bands allows for discriminating similar spatio-temporal phenomena better than using pixel reflectance and bands alone. Watkins and van Niekerk [79] attested the potential of object-based approaches to delineate crop field boundaries in multi-temporal S2/MSI data to improve crop mapping in a heterogeneous landscape. Graesser and Ramankutty [80] illustrated how Landsat object edges and VI SITS can improve cropland mapping in South America, presenting low geometric errors of crop field estimates. Yin et al. [81] integrated temporal and spatial segmentation of Landsat SITS to generate agricultural objects, aiming to improve land abandonment detection in the Caucasus region. The change detection using geo-objects achieved an OA of 97% for mapping six classes, performing better than pixel-level change detection (82%). Shang et al. [82] compared object- and pixel-based approaches for mapping 15 LULC classes in Beijing, China, using L8/OLI data to explore the influence of repeated sampling on classifications with different training sample sizes. The object-based approach performed better, and repeated sampling led to increasing accuracies. Sánchez-Espinosa and Schröder [36] integrated L8/OLI and S2/MSI data and applied an object-based classification method to improve the separability of 12 LULC classes in Andalucia, Spain, achieving an OA of 88%.
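The geo-object logic can be reduced to a toy example: average a feature within each segment, classify the object, and broadcast the label back to its pixels. The segment ids, NDVI values, and the 0.6 threshold below are all synthetic.

```python
import numpy as np

# Toy GEOBIA step: pixels are grouped into segments (geo-objects),
# features are averaged per object, and every pixel in an object
# receives the same class label.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])
ndvi = np.array([[0.80, 0.70, 0.20, 0.10],
                 [0.90, 0.80, 0.20, 0.20],
                 [0.40, 0.50, 0.85, 0.80],
                 [0.50, 0.40, 0.90, 0.80]])

n_obj = segments.max() + 1
sums = np.bincount(segments.ravel(), weights=ndvi.ravel(), minlength=n_obj)
counts = np.bincount(segments.ravel(), minlength=n_obj)
obj_mean = sums / counts                 # one mean NDVI value per geo-object

# Simple illustrative rule: mean NDVI > 0.6 -> "vegetation" (1), else 0.
obj_class = (obj_mean > 0.6).astype(int)
class_map = obj_class[segments]          # broadcast labels back to pixels
```

Assigning one label per object is what suppresses the within-class ("salt-and-pepper") variability that pixel-based classifiers exhibit on medium-resolution imagery.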

5.1.3. Conventional Spectral Bands and VIs

Combining bands and VIs provides accurate LULC classification and LULCC detection results [19,71,83]. Among the L8/OLI and S2/MSI bands, the Green, Red, and Red-edge bands are useful because they correlate with chlorophyll and other pigments; the NIR bands are useful for deriving leaf structure information; and the SWIR bands are sensitive to water content and vegetation structure [38,84,85]. Of these, the Red, Green, and NIR bands are the conventional ones for VI calculation [86]. VIs are indicators of photosynthetic activity and vegetative vigor used to assess variations in the physiological states and biophysical properties of vegetation, mainly through SITS analysis [86]. Nowadays, several applications rely on VI-based approaches to assess the heterogeneous and continuous nature of land cover composition and change processes [87].
We observed that the use of VIs derived from open data, before the wide availability of S2/MSI data, often focused on the reflectance of the visible and NIR regions: (1) the Normalized Difference Vegetation Index (NDVI) [88], a chlorophyll-sensitive index sufficiently stable to compare seasonal changes in vegetation growth [89]; (2) the Soil-Adjusted Vegetation Index (SAVI) [90], which minimizes soil brightness influences with a correction factor in the NDVI formula [89]; (3) the Enhanced Vegetation Index (EVI) [89], with corrections for atmospheric noise, which saturates less over high biomass [91]; and (4) the Normalized Difference Water Index (NDWI) [92], useful for measuring vegetation water status [93]. The predominance of these VIs is justified by their functionality and by the availability of their constituent bands in many sensors [40,86]. Variations were formulated to reduce specific problems. The Green NDVI (GNDVI) [94] introduced more sensitivity to chlorophyll-a content than the NDVI, making it useful to detect stressed and senescent vegetation as well as to estimate green crops [95]. Different SAVI-derived VIs were created to mitigate soil background effects in the reflectance signal during the green-up phenological phase, such as the Modified SAVI (MSAVI) [96] and the Optimized SAVI (OSAVI) [97]. The EVI-2 [98] was proposed to reduce the influence of soil reflectance on the EVI. For water detection, the NDWI using Green and NIR [99] and the Modified NDWI (MNDWI) [100] were developed.
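The conventional indices above follow simple band-arithmetic formulas. The sketch below implements the standard published forms (NDVI, SAVI with its usual L = 0.5 correction factor, EVI with the MODIS coefficients, and Gao's NDWI) on illustrative surface reflectance values for a healthy vegetation pixel.

```python
# Standard formulations of the conventional VIs; inputs are surface
# reflectances in [0, 1].
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil correction factor."""
    return (1 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue):
    """Enhanced Vegetation Index (coefficients 2.5, 6, 7.5, 1)."""
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)

def ndwi_gao(nir, swir):
    """NDWI in Gao's NIR/SWIR form, sensitive to vegetation water content."""
    return (nir - swir) / (nir + swir)

# Illustrative reflectances for a healthy, dense vegetation pixel.
nir, red, blue, swir = 0.45, 0.05, 0.04, 0.15
print(round(ndvi(nir, red), 2),      # 0.8
      round(savi(nir, red), 2),      # 0.6
      round(ndwi_gao(nir, swir), 2)) # 0.5
```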
Conventional VIs remain widely applied with L8/OLI and S2/MSI bands. Parente et al. [101] classified pasturelands in Brazil using only L8/OLI NDVI into an RF algorithm, obtaining an OA equal to 87% separating this class. Teluguntla et al. [72] developed a cropland classification for Australia and China, using NDVI and bands of L8/OLI into an RF algorithm, achieving a coefficient of determination of 0.85. Huang et al. [18] integrated NDVI SITS and textural features from S2/MSI to perform LULC classification in cloud prone areas of Thailand, using RF. The OA reached 89% classifying seven classes, highlighting forests and increasing the separability between crops and other vegetation types. Skakun et al. [42] integrated L8/OLI and S2/MSI for winter crop mapping and winter wheat assessment at a regional scale, considering NDVI SITS and crop calendar, achieving an OA equal to 90% separating winter crops. Palchowdhuri et al. [102] attested the relevance of NDVI, GNDVI, and SAVI in a multi-temporal crop type classification in Coalville, United Kingdom using RF and decision tree algorithms, with an OA equal to 91% classifying 16 classes. Mapping eight crop types in the Yellow River Delta, between 1985 and 2015, using NDVI, SAVI, MNDWI, and textural features from multi-temporal Landsat data and RF classifier, Feng et al. [6] obtained an average OA equal to 89%. Sonobe et al. [103] used L8/OLI bands and conventional VIs (including NDVI, EVI, and SAVI) to classify six crop types in Hokkaido, Japan, achieving an OA equal to 94.5%. Zhong et al. [104] obtained relevant results using EVI SITS to classify summer crops in California using deep learning algorithms. Despite these results, most of the conventional VIs associated with Red and NIR regions of the electromagnetic spectrum commonly saturate at moderate to dense canopies [37,58].

5.1.4. Landsat 8 and Sentinel-2 Data Integration

In an era marked by improvements in methods for LULC and LULCC analysis, the aggregation of spectral features is a trend to be further explored, because integrating data from multispectral sensors improves monitoring effectiveness [105]. The use of multi-source data is a current need, especially for detecting crop dynamics [19], and L8/OLI and S2/MSI data are useful not only for characterizing land use across large areas but also for enabling monitoring at the crop-field level [33]. Integration techniques have been used to improve the spatial resolution of the SITS data sets needed for phenological characterization [28]. The integration of L8/OLI and S2/MSI data is a recent trend [22,106,107] and has been applied to build dense SITS that improve temporal resolution and LULC classification approaches [108,109]. The open access to L8/OLI and S2/MSI data, their similar wavelengths, and their shared geographic coordinate system provide an excellent opportunity to integrate these data into SITS with a consistent spatial resolution for continuous global monitoring, enabling the mapping of cloud-prone areas and the downscaling of the 30 m L8/OLI data to 10 m [21].
The Harmonized Landsat Sentinel-2 (HLS) product, which integrates L8/OLI and S2/MSI inputs into a single consistent dataset with higher temporal frequency (2–4 days) and a standardized 30 m spatial resolution, has focused on the radiometric and geometric corrections necessary to generate a seamless surface reflectance product, based on methods for atmospheric correction, cloud/shadow screening, geometric resampling, geographic registration, corrections for bandpass differences, and bidirectional reflectance distribution function (BRDF) effects [22,106]. However, technical challenges remain. Pixel-level co-registration between L8/OLI and S2/MSI has been challenging, and approaches to register the data and to provide atmospheric correction, cloud, and cloud shadow masking algorithms are being developed for both sensors [42,56,77]. The differing swath widths and orbit tracks result in sun- and view-angle variability, causing non-negligible reflectance variations for certain applications [110]. There may be substantial regional variation implicit in such adjustments, which should be considered when they are applied regionally [111]. Despite these issues, HLS products have recently been used for different applications. At the national scale across Germany, their quality was attested for classifying vegetation classes and for mapping the frequency of grassland mowing events, showing the value of multi-sensor SITS for characterizing LULCC intensity across large areas [107,112]. Zhou et al. [113] also assessed the potential of HLS to characterize grasslands, observing its high sensitivity at the start of the growing season. Pastick et al. [105] monitored land-surface phenology in drylands using a regression tree modeling framework to integrate information from HLS products. They characterized seasonal variations with substantial agreement between observed and predicted values (R² ≈ 0.98), showing the enhanced monitoring capabilities derived from HLS data.
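One of the HLS corrections listed above, the bandpass adjustment, is a per-band linear transform that makes S2/MSI reflectance comparable to L8/OLI. The sketch below illustrates only the mechanics; the slope/offset coefficients are invented placeholders, not the published HLS values.

```python
import numpy as np

# Illustrative slope/offset pairs for adjusting S2/MSI surface reflectance
# toward an L8/OLI-like bandpass. HLS derives such per-band coefficients
# empirically; the numbers here are placeholders, not published values.
BANDPASS_COEF = {"red": (0.982, 0.004), "nir": (1.001, -0.001)}

def bandpass_adjust(reflectance, band):
    # Apply the per-band linear transform: adjusted = slope * x + offset
    slope, offset = BANDPASS_COEF[band]
    return slope * np.asarray(reflectance) + offset

s2_red = np.array([0.05, 0.10, 0.20])
print(bandpass_adjust(s2_red, "red"))
```

Because the transform is linear, it preserves the relative temporal shape of a reflectance series while shifting its absolute level, which is why it can be applied safely before computing VIs.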
Classifying crop types in the USA, Torbick et al. [114] attested the ability of HLS products to improve information about crop location and extent within the crop season, achieving accuracies above 85%. Hao et al. [115] evaluated HLS products for mapping single- and double-cropping in study areas of the USA, South Africa, India, and China, achieving OAs higher than 95% with 15-day harmonized NDVI and EVI SITS.
Other projects are advancing L8/OLI and S2/MSI data integration. Frantz [116] developed FORCE (Framework for Operational Radiometric Correction for Environmental monitoring) for the mass-processing and analysis of Landsat and S2/MSI images, generating highly analysis-ready data (hARD) products with cloud masking and radiometric, topographic, and BRDF corrections, aiming to solve spatio-temporal inconsistencies caused by cloud cover. The European Space Agency (ESA) funded two programs, Sen2like [117] and Sentinel-2 for Agriculture (Sen2-Agri) [118], to develop open-source systems based on SITS analysis methods for crop mapping and monitoring, which can be applied globally. Providing harmonized L8/OLI and S2/MSI ARD, Sen2-Agri aims to address crop monitoring needs and to support national reports for the “Sustainable Development Goal 2—Zero Hunger” (SDG-2) and the GEOGLAM initiative [119]. Sen2like aims to increase the acquisitions of this virtual constellation (95 products/year) by 30% relative to S2/MSI-only acquisitions, promoting dense SITS [117]. Tian et al. [120] developed a phenology-based algorithm integrating S2/MSI, Landsat 7, and L8/OLI NDVI SITS to classify winter crops in China. The approach filled gaps caused by cloud interference and reduced differences in winter crop phenologies, demonstrating that images covering the full length of the crop cycle are unnecessary. The OA was 89%. Feng et al. [121] proposed a CNN for integrating multi-temporal and multi-sensor data to improve coastal LULC classifications. With an OA equal to 93.78% when mapping 11 LULC classes, the results showed that including multi-temporal data improved accuracy by 6.85%, while multi-sensor data contributed a further 3.24% increase.
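Gap filling of cloud-contaminated observations, central to phenology-based approaches such as that of Tian et al. [120], can be sketched minimally as linear interpolation over the cloud-free dates. The values below are hypothetical and the operational algorithms are considerably more elaborate (smoothing, crop-calendar constraints).

```python
import numpy as np

def fill_gaps(dates, ndvi, valid):
    """Linearly interpolate cloud-contaminated NDVI observations.

    dates: day-of-year of each acquisition; valid: boolean cloud-free mask.
    A minimal stand-in for the gap-filling step, not a published algorithm.
    """
    dates, ndvi = np.asarray(dates, float), np.asarray(ndvi, float)
    valid = np.asarray(valid, bool)
    # Interpolate every date from the cloud-free subset only
    return np.interp(dates, dates[valid], ndvi[valid])

doy   = [10, 30, 50, 70, 90]
ndvi  = [0.2, 0.9, 0.5, 0.9, 0.3]   # index 1 is a cloudy outlier
valid = [True, False, True, True, True]
print(fill_gaps(doy, ndvi, valid))  # the cloudy value is replaced by 0.35
```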
Methods to calibrate reflectance differences between L8/OLI and S2/MSI are in development [22,118,122,123]. In addition, the processing system baselines and surface reflectance approaches for both data sets, in particular the more recent S2/MSI data [33], have changed frequently, and novel algorithms are being developed to mitigate these issues [43]. Projects such as HLS, FORCE, Sen2-Agri, and Sen2like are still in development, but they can already provide robust medium-resolution data for LULC and LULCC approaches, endorsing the view that, since both L8/OLI and S2/MSI provide precise information, they should coexist and be used jointly to increase data availability [36].

5.2. Gaps and Challenges

The harmonized use of L8/OLI and S2/MSI has helped to correct historical gaps and uncertainties in LULC analysis [28], for example, the long interval between two valid observations of the same point and the LULC mixing in pixels lying on change boundaries [124]. However, three crucial gaps still limit LULC classification in heterogeneous and large landscapes: (i) the scarcity of field samples to train classifiers, (ii) the demands of big data processing, and (iii) the difficulty of standardizing both L8/OLI and S2/MSI data. In addition, the accuracy of LULC classification and LULCC detection still depends on the performance of the classification methods [50]. Different results demonstrate that, by addressing or mitigating these crucial gaps, classifiers can perform their tasks more effectively, making the problem more manageable.

5.2.1. The Gap of Representative Samples

The growing volume and availability of remote sensing data allow multi-dimensional analyses for LULC classification. However, even with medium-resolution data, performing classification with high accuracy, especially for vegetation analysis, depends on the quality, quantity, and temporal continuity of the training set [61,125]. As large-scale mapping approaches rapidly advance, addressing landscape heterogeneities without introducing distortions and accuracy biases poses a challenge. This topic is widely discussed, and the consensus is that representative sample datasets make all the difference in improving classifications by different methods [126]. Therefore, the composition of the training dataset remains an issue, especially in heterogeneous areas with scarce data [127]. Different authors have discussed the impact of the low availability of representative samples on the accuracy of LULC classification. Waldner et al. [128] showed that LULC classification performance depends more on the cropping systems data than on the classification method. Considering that agricultural landscapes with similar crop cover composition have similar phenological development, the sampling strategy is crucial to obtain representative landscape samples. Incorporating more spectral features is important; however, that alone does not help the classifier, so it becomes interesting to stratify training samples according to the different targets [71]. This means that not only the absence of samples to train the algorithm, but also the absence of representative samples for each class, can be detrimental to classification results [129].
Training samples are usually collected from field surveys that follow statistical and spatial guidelines to determine the data acquisition [3,33,83,130,131]. Given the challenges and the importance of this process to classification model performance, Fowler et al. [132] developed a tool to guide data collection, using a normalized Moran’s I index as an indicator for choosing the survey route and the most important fields to survey along that route. However, field surveys involve high costs in terms of money and time, and visual interpretation can be difficult when images or photos are not available, which may introduce biases [125,132]. An alternative is the use of unsupervised classification methods [133]. These methods are often vulnerable to outliers, high dimensionality, and noisy features, and require the user to input unknown information about the data. However, their advantage is the ability to create crop type maps where little to no field-level ground data exists [13]. Other effective options include deriving training samples from existing LULC products [134,135], repeated sampling and data augmentation to expand the training sample [82], clustering methods to assess samples [16], and using expert knowledge through visual interpretation of other products, such as high-resolution images from Google Earth and aerial photographs [17].
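As a minimal illustration of the unsupervised route, the sketch below clusters hypothetical two-band spectral samples with k-means (scikit-learn is assumed to be available); the resulting clusters can later be labeled by an analyst without field-level ground data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical two-band spectral samples: a bright cluster (bare soil)
# and a dark one (water); no field labels are needed to separate them.
rng = np.random.default_rng(0)
soil  = rng.normal([0.30, 0.35], 0.01, size=(50, 2))
water = rng.normal([0.03, 0.02], 0.01, size=(50, 2))
X = np.vstack([soil, water])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Each simulated class should fall entirely into one cluster
print(len(set(labels[:50])), len(set(labels[50:])))
```

In real SITS workflows the feature space would be multi-date and multi-band, which is exactly where the sensitivity to noise and high dimensionality noted above becomes a concern.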

5.2.2. Challenges to Deal with Big Data and Landsat-Sentinel’s Data Integration

The term ‘big Earth data’ emerged to describe massive EO datasets that confront analysts and their traditional workflows with a range of challenges [49]. The use of L8/OLI and S2/MSI for large-scale LULC classification requires a big EO dataset, with the processing of a significant number of orbital images, for which a complex computational infrastructure is needed to store, manage, and process the data [136]. The required processing capacity, file storage, and working time represent difficulties when working with long SITS and large study areas [36]. Understanding this, and stimulating science users to ‘bring their application to the data’, has prompted many initiatives to build high-performance computing systems that host large satellite data collections to enable large-volume processing [47], such as the Australian Geoscience Data Cube [137], the Swiss Data Cube [57], and the Brazil Data Cube (http://brazildatacube.org/pt/pagina-inicial-2/). The Committee on Earth Observation Satellites (CEOS) also suggests portable open-access data cube architectures to facilitate the use of EO data and to share knowledge [47]. Alternatively, optimal dates based on crop calendars can reduce the data volume for mapping LULC classes with marked seasonality [138]. However, this depends on data availability and may fail when images of key crop developmental stages are not available [34,139].
For the first time, the applied science community has access to systematic and near-daily medium-resolution optical data [114]. Using data provided by different sensors can improve the temporal resolution of SITS, fill data gaps, and improve the quality of Earth surface monitoring [140]. However, how to effectively integrate the information provided by different sensors remains an unsolved problem in remote sensing science [141], due to different spectral or spatial sensor characteristics, acquisition geometries, illumination conditions, or atmospheric states [142], leading to inconsistencies in products derived from multi-sensor approaches [140]. This issue requires the development of new ways to extract landscape information from big EO data sets [136]. Researchers have been engaged in finding ways to use S2/MSI and Landsat data together. One promising source for driving the next generation of products is the L8/OLI and S2/MSI harmonization [114], and some groups lead this task [7]. The NASA Multi-source Land Imaging (MuSLI) program has supported land science products in diverse areas, including burned area mapping, forest phenology studies, and fractional water characterization [114]. Similar work is underway in Europe through ESA, Copernicus services, and national monitoring activities complemented by the NASA/USGS Landsat Calibration Team and the ESA S2 Validation Team [118]. The harmonization approach often has five steps: (1) gridding imagery to a common pixel resolution, projection, and spatial extent (tile); (2) atmospheric correction; (3) cloud masking; (4) adjustments to represent the response from a common spectral bandpass and nadir BRDF normalization keyed to a single, global, constant BRDF shape [110,142]; and (5) geographic co-registration to warp and align the data [22,116,140]. In recent experiments, harmonized data are being re-projected and organized into data cubes [116,140].
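The five steps above can be sketched as a pipeline of placeholder functions. Every stage below is a deliberately simplified stand-in with illustrative constants; none of it reproduces the actual HLS or FORCE algorithms.

```python
import numpy as np

def grid_to_tile(img):              # (1) common resolution/projection/extent
    return img                      # placeholder: real step resamples/warps

def atmospheric_correction(img):    # (2) TOA -> surface reflectance
    return np.clip(img - 0.01, 0, 1)  # illustrative path-radiance offset

def cloud_mask(img, clouds):        # (3) flag cloudy pixels
    return np.where(clouds, np.nan, img)

def bandpass_brdf_adjust(img, slope=0.99, offset=0.002):  # (4)
    return slope * img + offset     # illustrative linear adjustment

def co_register(img):               # (5) warp to a common geometry
    return img                      # placeholder: real step shifts pixels

def harmonize(img, clouds):
    img = grid_to_tile(img)
    img = atmospheric_correction(img)
    img = cloud_mask(img, clouds)
    img = bandpass_brdf_adjust(img)
    return co_register(img)

img = np.array([[0.20, 0.30], [0.40, 0.90]])      # hypothetical reflectance
clouds = np.array([[False, False], [False, True]])
print(harmonize(img, clouds))  # the cloudy pixel becomes NaN
```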
However, a variety of technical challenges remain without a consensus approach, concerning co-registration, atmospheric correction, cloud and shadow masking [56], sun- and view-angle variability, the calibration of reflectance differences [22,123], and the processing baselines and surface reflectance approaches for both datasets.

5.3. Trends to Be Further Explored

Emerging opportunities have the potential to further advance LULC and LULCC analysis (Figure 7). The dissemination of open science principles can favor the sharing of sample datasets, including in situ samples [44], a common practice that has driven progress in other scientific areas such as medicine. The publication of in situ sample datasets in open repositories such as GitHub (https://github.com/), Pangaea (https://www.pangaea.de), and Zenodo (https://zenodo.org/) has been growing, and these data can enable more accurate classifications [7,16,107]. This is part of the “Land Cover 2.0 era” described by Wulder et al. [47], in which transparency regarding input data and algorithms, together with the adoption, implementation, and communication of rigorous accuracy assessment protocols, is reinforced.
Currently, in situ data are not always available. In addition, in long-term multi-temporal analyses it is difficult to have samples for each year of interest [124,125]. Various approaches have been used to overcome the gap of training samples, such as semi-supervised learning and active learning; however, they depend on preexisting labeled samples, which require user expertise [125]. Especially for the agricultural sector, knowing crop distribution is useful for management and planning decisions. The choice of samples during the data collection step impacts a model’s performance, but collection is usually made via roadside surveys throughout the study area [132]. Exploring ways to obtain representative training samples therefore becomes important, and open data policies with data sharing can address this need. Consequently, they can boost the use of less frequent VIs, phenological metrics, hierarchical classifications, harmonized L8/OLI-S2/MSI data, environmental variables, and other strategies that deal with classification problems [71,126,129].

5.3.1. Less Frequently Used Spectral Bands and VIs

Despite the advantages derived from the integrated use of L8/OLI and S2/MSI, the large data volume and the different characteristics of recent sensors introduce substantial challenges for data storage, for the development of classification approaches, and for improving the reasoning about LULC dynamics [12], suggesting that innovative perspectives of analysis should be explored. An emerging alternative is to expand the exploitation of the electromagnetic spectrum. It is reasonable to suppose that better classification accuracies can be achieved if all bands throughout the SITS are used as features, instead of a greenness-related spectral index formulated from only two bands per image [71], because the full optical spectrum comprises wavelengths that are sensitive to vegetation properties other than greenness [62].
A common practice is the use of bands beyond the visible and NIR as input for LULC classification approaches. However, for a long time this was restricted to a few research groups due to the costs associated with image acquisition, storage, and access, and the specialized skills and software required to transform imagery into actionable information [57,143]. In recent years, the exploration of the electromagnetic spectrum by the broader science community has expanded with the advent of Earth observation missions characterized by global coverage and medium spatial resolution, especially Landsat-like (10–30 m) missions such as L8/OLI and S2/MSI, which jointly have provided global diffusion of narrow and sensitive spectral bands within this range of spatial resolution [28,40]. The most highlighted bands in L8/OLI and S2/MSI are the SWIR and Red-edge, and VIs based on them have increased classification accuracies in different landscapes around the world [85,127,144]. Most of these VIs were little explored for a long time [40] and gained traction only after the advent of the S2/MSI mission. Vuolo et al. [145] attested the value of S2/MSI multi-temporal data for crop type classification, analyzing nine crop types in Austria using an RF algorithm and obtaining an OA equal to 96% in both 2016 and 2017. Sonobe et al. [40] evaluated 82 VIs calculated from S2/MSI data to identify six crop types using RF and SVM, recommending the application of less frequently used VIs, with an OA higher than 89%. Immitzer et al. [146] observed the importance of S2/MSI’s SWIR and Red-edge bands for classifying vegetation species and crops using the RF algorithm. Ramoelo et al. [147] verified that combining S2/MSI’s SWIR and the first Red-edge band (band 5) achieved the highest importance for mapping the African savannah. Macintyre et al. [148] identified SWIR and the first Red-edge as the most informative S2/MSI bands for vegetation classification in Australia. Persson et al.
[149] used multi-temporal S2/MSI data to classify five tree species in Sweden with RF, highlighting Red-edge and SWIR to detect phenological differences, with an OA equal to 88.2%. Sothe et al. [150] evaluated the performance of L8/OLI and S2/MSI data to map successional forest stages in the Atlantic rainforest, Brazil, with SVM and RF, and considered that Red-edge and SWIR bands were decisive attributes to differentiate similar phenologies. Abdi [151] highlighted the importance of Red-edge and SWIR of S2/MSI to map boreal landscapes.
Influenced by plant constituents such as pigments, leaf water, and biochemical components, SWIR bands have been used as foliar water content indicators [62], to detect drought stress, and to identify forest disturbances [84]. Nevertheless, most published SWIR-based VIs have been little applied for LULC classification [152], although, when used, they have presented good results. Jakimow et al. [153] integrated different SWIR VIs, such as the Normalized Difference Moisture Index (NDMI) [154], the Normalized Burn Ratio (NBR) [155], and NBR-2 [155], with NDVI, EVI, and SAVI in an RF model to monitor pasturelands in the Brazilian Amazon using Landsat time series. The SWIR VIs improved the detection of burned pastures, burned secondary regrowth, and tilled pastures. Müller et al. [156] extracted metrics from Landsat NDVI time series and developed a Short-Wave Infra-Red Index (SWIR-Index) to separate cropland, pasture, and natural savanna vegetation in the Brazilian Cerrado through a spectral-temporal approach. SWIR bands showed the greatest importance among 30 features in the RF model, and the classification achieved an OA equal to 93% for six classes. SWIR and associated VIs also allowed accurate separability between foreground and background objects, such as grassland and crops [157], and improved classification accuracies in heterogeneous landscapes based on S2/MSI [146,158] and L8/OLI time series [62]. Using Landsat data from 2000 to 2015, Cai et al. [159] considered SWIR bands and the associated Land Surface Water Index (LSWI) [160] the most useful information for classifying maize and soybean, with an OA equal to 96%. Using S2/MSI data and the RF algorithm to identify optimal temporal windows for crop mapping in southern China, Meng et al. [161] showed that using all images during a crop growth cycle is unnecessary, with the later stages and the SWIR and NIR bands being the most useful. Assessing 28 VIs, Immitzer et al.
[162] demonstrated the suitability of S2/MSI SWIR bands for separating broad LULC classes and tree species. SWIR-based VIs have also been used to classify built-up and bare lands [163,164].
The term Red-edge refers to the abrupt reflectance rise caused by vegetation within the electromagnetic spectrum from 680 nm to 750 nm, which is related to two optical effects of plant tissue: (a) strong chlorophyll absorption, causing low Red reflectance, and (b) high internal leaf scattering, causing large NIR reflectance [165]. This spectral region is correlated with photosynthetic capacity and is an important measurement of chlorophyll content and vegetation health [95]. The presence of Red-edge bands in the S2/MSI mission expanded their use for LULC classification, representing a significant spectral enrichment relative to other sensors used in LULC classification [166]. Especially in circumstances of saturation, Red-edge bands have the potential to improve the accuracy of biomass and crop yield estimates [58]. It has been reported that the Red-edge band close to Red wavelengths (band 5) is related to differences in chlorophyll content, while the one close to the NIR (band 7) is correlated to variations in leaf structure [38]. This suggests that the separability of Red-edge features lies in both the leaf structure and the chlorophyll content of different vegetation species [85], which is suitable for mapping heterogeneous landscapes, especially using Red-edge-associated VIs [166].
Red-edge bands and associated VIs have been applied to estimate biophysical variables of vegetation, such as LAI, Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), and Gross Primary Productivity (GPP) [84,167,168,169], suggesting that VIs formulated with Red-edge bands can reduce saturation when compared with VIs derived from the Red band. Such VIs exploit the Red-edge region of the electromagnetic spectrum, which correlates strongly with the chlorophyll concentration contained in vegetation [84]. Several papers demonstrate that S2/MSI Red-edge bands and associated VIs have provided more accurate results in LULC classification, improving both overall and class-specific accuracies. For Palchowdhuri et al. [102], the inclusion of Red-edge bands in a LULC classification model positively impacted vegetation classes and improved the OA. Immitzer et al. [146] showed Red-edge as the most important S2/MSI bands for crop discrimination over an Austrian agricultural area. Radoux et al. [157] confirmed this assumption in southern Belgium, showing that the second and third Red-edge bands performed better than the first. Integrating L8/OLI and S2/MSI for LULC classification in West Africa, Forkuor et al. [144] showed that the use of S2/MSI Red-edge bands improved the map accuracy by 4% compared to that of L8/OLI and other S2/MSI bands. In addition, the authors reformulated the Red-edge-based Normalized Difference Vegetation Index Red-edge (NDVIre) [170] with Red-edge bands 5 and 6 of the S2/MSI, creating two relevant Red-edge-associated VIs.
Assessing the potential of the S2/MSI to classify grass species, Shoko and Mutanga [171] attested the importance of Red-edge bands, which equalized the OA with a Worldview-2 (high spatial resolution) classification of three species. Munyati et al. [39] indicated that the Red-edge VIs Chlorophyll Index Red-edge (CIre) [172], Red-edge Inflection Point (REIP) [173], and NDVIre, together with a proposed SWIR ratio index, can monitor grass tissue concentrations of soil nutrients such as nitrogen, potassium, calcium, and magnesium, which can be valuable features to increase the separability between natural vegetation and crop type classes. Osgouei et al. [174] showed the potential of NDVIre to separate bare land from built-up areas in Turkey using the SVM algorithm; the OA was equal to 93% when mapping seven classes. Extracting spatio-temporal information from VI time series to classify crop types in Manitoba, Canada, Niazmardi et al. [63] indicated that NDVIre performed better than NDVI and SAVI (OA 88.31%, 87.27%, and 84.36%, respectively) for six crop types, using RapidEye images to formulate the VIs, and indicated that the integration of multiple sources can be further exploited to improve field-scale crop analysis and mapping. In the work of Griffiths et al. [107], the integration of L8/OLI and S2/MSI data to generate a national-scale LULC map of Germany, detecting crop type distribution at a 30 m spatial resolution, provided an OA equal to 81% for 12 LULC classes; the inclusion of Red-edge bands improved all class-specific accuracies. Vincent et al. [175] attested the importance of the S2/MSI Red-edge bands and derived VIs (NDVIre, SAVIre, MSRre, and CIre) for mapping sunflower and wheat crops in India, using a c-means classification approach. Sun et al.
[85] mapped six LULC classes at one level and five crop types at another using an RF classifier, and showed that including Red-edge bands improved the OA of crop type mapping compared with conventional optical features alone; the OAs were equal to 94.05% and 83.22%, respectively. Lambert et al. [158] attested the importance of Red-edge bands and associated VIs for mapping agricultural fields in Mali’s cotton belt using an RF algorithm. All of these results emphasize the robustness of these bands and associated VIs for providing accurate mapping of different LULC classes in heterogeneous landscapes.
Although various studies explore VIs for LULC mapping and monitoring, most remain restricted to the traditional ones, and literature summarizing combinations of several VIs for this purpose is lacking. With L8/OLI and, especially, S2/MSI data, less frequently applied VIs have been useful for improving classification results in dynamic and heterogeneous regions. The suitability of less frequently applied VIs for mapping different LULC classes, considering the papers cited in this systematic review, is summarized in Table 3 in order to provide guidance for future applications. The formulas are presented in the Supplementary Material (Tables S1–S3).
In the future, harmonized Landsat and Sentinel data can drive classifications, and the potential inclusion of Red-edge [29,43,105,140], as well as additional narrower bands at 10 m spatial resolution in the visible, NIR, SWIR, and Thermal Infrared regions in future Landsat missions [58], can boost the use of these and other VIs in a dense time series perspective, which is promising for improving accuracies. However, despite the differences between the L8/OLI and S2/MSI band wavelengths, we observed that some of the cited VIs have different terminologies but the same formula, or different formulas for the same VI. The combination of the NIR and first SWIR bands, (NIR − SWIR1)/(NIR + SWIR1), is named in the literature as the Normalized Difference Infrared Index (NDII) [176], recently employed with this nomenclature [40,150,152]; as the LSWI, also recently applied [114,144,159,177]; as the NDMI, recently applied [129,153,178]; and as the frequently used NDWI developed by Gao [92]. NDWI is also the name of the VI developed by McFeeters [99], which differs from Gao’s NDWI by using the Green band instead of SWIR. Similarly, the Normalized Difference SWIR (NDSWIR) [179] and the Normalized Burn Ratio (NBR) used by Sonobe et al. [40] both combine the NIR with the second SWIR band. Combining only the SWIR bands, the Normalized Difference Tillage Index (NDTI) [180] has the same formula as the Normalized Burn Ratio-2 (NBR-2), (SWIR1 − SWIR2)/(SWIR1 + SWIR2), applied by Jakimow et al. [153].
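The synonymy noted above is easy to verify numerically. Assuming hypothetical reflectance values, the differently named SWIR indices reduce to the same normalized differences:

```python
def norm_diff(a, b):
    # Generic normalized difference; many SWIR VIs are this ratio under
    # different names and band pairings
    return (a - b) / (a + b)

nir, swir1, swir2 = 0.45, 0.25, 0.15  # hypothetical reflectances

# NDII, LSWI, and NDMI all share the NIR/SWIR1 formulation
ndii = lswi = ndmi = norm_diff(nir, swir1)
# NDSWIR and Sonobe et al.'s NBR combine NIR with SWIR2
nbr = norm_diff(nir, swir2)
# NDTI and NBR-2 combine only the two SWIR bands
ndti = nbr2 = norm_diff(swir1, swir2)
print(round(ndii, 3), round(nbr, 3), round(ndti, 3))  # → 0.286 0.5 0.25
```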
The nomenclature of Red-edge-associated VIs is also sometimes confused. The CIre, with the original formula (NIR/RE) − 1, where RE is a band within 695–740 nm, developed by Gitelson et al. [172], has been formulated using NIR and Red-edge bands [37,86,168], two Red-edge bands [39,85,167,169], and Red-edge and Green bands [158]. The NDVIre (also called RENDVI) [170], with the formula (NIR − RE1)/(NIR + RE1), has received variations over time, such as the inversion of the Red-edge position in the formula, (RE1 − NIR)/(RE1 + NIR) [174], and the use of other Red-edge and NIR bands of S2/MSI [38], but these changes are rarely named differently. This nomenclature has already been partially modified to Normalized Difference Red-Edge (NDRE), referring either to the previous NDVIre formula [86] or to the formula (RE2 − RE1)/(RE2 + RE1) and its variation NDRE2, (RE3 − RE1)/(RE3 + RE1), both recently applied to LULC classifications [40,61,152,169]. The VI represented by the expression NIR/Green − 1, developed by Gitelson et al. [181], has been named both the Chlorophyll Index Green (CIGreen) [38,168] and the Green Chlorophyll Vegetation Index (GCVI) [13,182].
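These Red-edge formulations can likewise be written out explicitly. The reflectance values below are hypothetical, and the band names follow the S2/MSI convention used above (RE1–RE3 for Red-edge bands 5–7):

```python
# Hypothetical S2/MSI reflectances: Green, Red-edge bands 5-7, and NIR
green, re1, re2, re3, nir = 0.08, 0.20, 0.35, 0.40, 0.45

cire    = nir / re1 - 1                 # CIre as (NIR / Red-edge) - 1
ndvire  = (nir - re1) / (nir + re1)     # NDVIre / RENDVI
ndre    = (re2 - re1) / (re2 + re1)     # NDRE variant with two Red-edge bands
ndre2   = (re3 - re1) / (re3 + re1)     # NDRE2 variant
cigreen = nir / green - 1               # CIGreen, also published as GCVI
print(round(ndvire, 3), round(ndre, 3))  # → 0.385 0.273
```

Writing the formulas side by side makes the naming collisions concrete: the same symbol (e.g., NDRE) can denote different band pairings depending on the paper.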
Additionally, the type of approach causes differences in VIs that use the S2/MSI NIR and visible bands. When the VI is derived from a dense time series approach combining S2/MSI and L8/OLI with resampling procedures, the narrower S2/MSI NIR band 8A (855–875 nm) is recommended, as it is more comparable to the L8/OLI NIR band 5 (851–879 nm) [42,66,111,123]. When only S2/MSI bands are used, the broad S2/MSI NIR band 8 (785–900 nm) is recommended, because band 8A has a different spatial resolution (20 m) from the visible bands (10 m) [33,64,65,93,166,183]. This issue makes it necessary to revisit previous studies before choosing a VI consistent with the research objectives.

5.3.2. Phenological Metrics

Phenology-based LULC classification approaches are useful for mapping different crop types [184] and can be important for ensuring food security and supporting agronomic crop management. This situation motivated the development of methods to extract phenological metrics (phenometrics) (i.e., start, peak, end, and length of a season, and maximum and quartile values of spectral indices and spectral bands) that describe critical moments of phenological cycles through the timing and value of specific temporal points of interest [63,104,116]. Phenometrics can be defined as the temporal characteristics that indicate crop seasonality, and they have the potential to represent crop growth [185]. Zeng et al. [28] summarized different methods for extracting phenometrics, pointing out that the selection of the most appropriate method should consider the biogeographical characteristics of the study area, potential noise sources in the data, and the shape of the VI time series. The current practice for large-area multi-temporal LULC classification consists of deriving metrics from the time series and then classifying the metric bands [1,186]. Most efforts to extract phenological information from vegetation have used MODIS data due to its high temporal resolution and applicability for obtaining robust time series information [12,123]. However, MODIS images at 250 m or 500 m spatial resolution suffer from mixed pixels [66].
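A threshold-based sketch of such phenometrics, applied to a synthetic single-season VI curve, illustrates the idea (a minimal illustration under simplifying assumptions, not any specific published method):

```python
import numpy as np

def phenometrics(doy, vi, frac=0.5):
    """Threshold-based phenometrics from a smoothed, single-season VI series.

    Start/end of season (SOS/EOS) are taken as the first/last days on which
    the VI meets a fraction of its seasonal amplitude above the baseline.
    """
    doy, vi = np.asarray(doy, float), np.asarray(vi, float)
    base, peak = vi.min(), vi.max()
    thresh = base + frac * (peak - base)
    above = np.flatnonzero(vi >= thresh)
    sos, eos = doy[above[0]], doy[above[-1]]
    return {"sos": float(sos), "peak_doy": float(doy[vi.argmax()]),
            "eos": float(eos), "length": float(eos - sos),
            "max_vi": float(peak)}

# Synthetic monthly VI observations over one growing season
doy = np.arange(0, 360, 30)
vi = np.array([0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3, 0.2, 0.2, 0.2])
print(phenometrics(doy, vi))
```

Operational methods first smooth and gap-fill the series (as discussed in the previous sections) before thresholding or curve fitting, since cloud noise directly shifts the detected season boundaries.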
Phenology-based LULC classification using Landsat-like data has been growing since the S2/MSI launch but remains challenging. We noted difficulties in extracting phenometrics from L8/OLI and S2/MSI data, especially in regions marked by climatic conditions that cause signal interference, modifying time series patterns. L8/OLI data are less explored due to their 16-day temporal resolution, and S2/MSI data are mostly used for detecting plant pigments, discriminating different vegetation classes, mapping crop types, and estimating crop yields; even within croplands, phenometrics are not always calculated [187]. Integrating L8/OLI and S2/MSI data would significantly improve the temporal resolution of observations [41] and could provide an effective solution to the issues of low data availability and low spatial resolution; however, it is still common to integrate MODIS data to improve the identification of crop phenological cycles [66].
Frantz [116] presented how the FORCE framework offers innovative ways of deriving phenometrics from dense time series. The framework can derive this information from MODIS, L8, and S2 data, which can be used as baseline products to support scientific and operational applications for environmental and agricultural monitoring purposes. To complement data products already derived from MODIS time series images, Liu et al. [66] extracted phenological information to calculate cropping intensity, showing that expanding the set of high-quality observations through the integration of L8/OLI and S2/MSI data improved the results. Wulder et al. [47] recognized the importance of phenometrics for maximizing time series information and minimizing noise before large-area multi-temporal LULC classification, as these metrics are robust to missing data and phenological variations, citing the implementation of data fusion approaches based on blending MODIS and Landsat data to produce synthetic gap-free composites. Zhang et al. [123] combined the MODIS MCD12Q1 500 m land cover product with phenometrics extracted from consistent, pre-processed Landsat time series to classify 16 LULC classes over North America, obtaining OAs ranging between 93.13% and 95.44% in RF approaches.
Using Landsat EVI and co-registered accumulated growing degree-days (AGDD) MODIS time series as input for LULC classification, Nguyen et al. [188] adopted a model-fitting approach to describe phenological characteristics, such as the timing of the growing season, the EVI peak, and the timing of the EVI peak, for eight LULC types in South Dakota (USA). Fitted parameter coefficients and pixel-level phenometrics were submitted to an RF classifier to characterize LULC for the study area. They achieved OAs over 75%, showing both the potential and the limitations of integrating phenometrics to improve land surface characterization and LULC classification. Bolton et al. [12] developed the multi-sensor land surface phenology (MS-LSP) algorithm to detect seasonal variations in vegetation phenology and calculate phenometrics from HLS EVI time series for all of North America. Comparing these metrics with those extracted from the PhenoCam network and the MODIS MCD12Q2 product, they found a correlation of 90% with PhenoCam data for 50% green-up dates in deciduous broadleaf forests. Pastick et al. [189] used MODIS and HLS NDVI image composites to extract phenometrics and map invasive annual grass species in the western USA within a data mining framework. They achieved high correlations with different phenological stages, improving the mapping compared to models using MODIS products or monthly, seasonal, or annual HLS composites as primary inputs.
Different approaches were developed with Landsat-like data alone. Son et al. [177] demonstrated the efficacy of phenometrics extracted from multi-temporal S2/MSI data for updating rice crop maps and monitoring cropping practices over a large and heterogeneous region, by detecting key phenological dates, such as transplanting and heading dates. In a deep learning-based multi-temporal crop classification in California (USA), Zhong et al. [104] showed the importance of phenometrics for improving classification accuracy. They explored neural network classifiers, XGBoost, RF, and SVM, with metrics derived from Landsat 7 Thematic Mapper (L7/TM) and L8/OLI time series, achieving OAs above 82%. Schwieder et al. [109] explored phenometrics derived from dense Landsat time series to improve the classification of seven Cerrado physiognomies in Brazil. Their phenology-based physiognomy map achieved OAs of 63% with 10-fold cross-validation and 96% with a validation scheme that accommodates spectral-temporal class similarities along the vegetation gradient. Using phenometrics derived from Landsat time series in a spectral-temporal approach, Müller et al. [156] improved the separability among six classes of cropland, pasture, and natural savanna vegetation in the Brazilian Cerrado, achieving an OA of 93%. Schmidt et al. [190] applied several phenometrics of the summer and winter growing seasons derived from Landsat data to identify crop and no-crop areas in Queensland, Australia, at field scale. The OAs were above 90%, and these metrics were among the most important variables in the classification. Griffiths et al. [107] discussed the potential of phenometrics to classify crop types with higher thematic detail from multiple crop seasons over large study areas. This was attested by Bendini et al. [68], who extracted different phenometrics as attributes from dense Landsat time series to classify croplands in different Brazilian regions with accuracies higher than 90%.
Pena and Brenning [191] extracted phenometrics of the green-up and senescence stages from L8/OLI time series to identify optimal image acquisition dates for mapping five fruit-tree crop types in Chile, obtaining an OA of 94%. Htitiou et al. [192] highlighted the potential of phenometrics derived from smoothed S2/MSI and L8/OLI NDVI time series to identify 12 LULC classes (10 crop types) over irrigated areas in Morocco using RF classification. The classifications based on S2/MSI and L8/OLI data presented OAs of 93% and 90%, respectively, showing potential for detecting phenological stages of different cropping systems over semi-arid regions. Zhang et al. [193] integrated L8/OLI and HJ-1/Charge-Coupled Device (HJ-1/CCD) data to mask non-vegetation areas and then extract key phenometrics of four crop types from long-term NDVI time series profiles in Jingzhou, China. They obtained an OA of 92%, demonstrating the potential of phenometrics-based crop classification. Although using RapidEye images, Niazmardi et al. [63] improved crop classification in Canada after extracting temporal and spatial metrics from VI time series (NDVI, NDVIre, and SAVI) as histogram-based spatio-temporal features, highlighting their importance for characterizing both the temporal and spatial crop patterns.
The exploitation of deep learning techniques and phenometrics extracted from SITS can also improve LULC classification and crop status monitoring during a growing season [131,133,194]. However, their potential has not yet been fully exploited for mapping multiple cropping systems and cultivation practices in highly dynamic regions [104,156,190]. For example, few papers use Red-edge bands to estimate phenometrics. Hence, more effort is required to test the efficiency of S2/MSI data in estimating phenological phases and extracting phenometrics [187].
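To make the notion of phenometrics discussed above concrete, the sketch below derives start, peak, and end of season from a smoothed NDVI profile using a simple amplitude-threshold rule (a common heuristic in phenology software). The function name, the synthetic single-season profile, and the 50% threshold are our own illustrative choices, not a method from any of the cited papers.

```python
import numpy as np

def phenometrics(ndvi, dates, amp_frac=0.5):
    """Extract simple phenometrics from a smoothed NDVI time series.

    `ndvi` is a 1-D array of smoothed values and `dates` the day of year
    of each observation. Season start/end are taken as the first and last
    observations exceeding a fraction (`amp_frac`) of the seasonal amplitude.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    base, peak = ndvi.min(), ndvi.max()
    thresh = base + amp_frac * (peak - base)       # e.g. 50% of amplitude
    above = np.where(ndvi >= thresh)[0]
    return {
        "sos": dates[above[0]],                    # start of season
        "eos": dates[above[-1]],                   # end of season
        "pos": dates[int(np.argmax(ndvi))],        # peak of season
        "amplitude": peak - base,
        "length": dates[above[-1]] - dates[above[0]],
    }

# Synthetic single-season profile at a 16-day revisit, as with L8/OLI
doy = np.arange(1, 366, 16)
profile = 0.2 + 0.5 * np.exp(-((doy - 200) ** 2) / (2 * 40 ** 2))
m = phenometrics(profile, doy)
```

Real L8/OLI or S2/MSI series would first need cloud masking, gap filling, and smoothing (e.g. Savitzky-Golay), which is precisely where the difficulties noted above arise.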

5.3.3. Data Hierarchy and Ancillary Data

Hierarchical classification approaches can also be developed to obtain maps at different thematic levels, enabling, for example, the evaluation of cropping practices and crop types without an a priori cropland mask, and the discrimination of more vegetation species [68,83,195]. This approach can leverage the suitability of Red-edge bands for decomposing vegetation types [146,162].
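A minimal sketch of such a two-level hierarchy, assuming a scikit-learn workflow with synthetic features (class layout and cluster shifts are invented for illustration): a first classifier produces the cropland mask, and a second classifier, trained only on cropland samples, labels crop types within that mask.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic spectral-temporal features: class 0 is natural vegetation,
# classes 1 and 2 are crop types (illustrative only)
X = rng.normal(size=(300, 6))
y = np.repeat([0, 1, 2], 100)
X[y == 1] += 1.5            # separate the crop-type clusters
X[y == 2] -= 1.5

# Level 1: binary cropland mask (crop vs. non-crop)
level1 = RandomForestClassifier(n_estimators=100, random_state=0)
level1.fit(X, (y > 0).astype(int))

# Level 2: crop types, trained only on cropland samples
crop = y > 0
level2 = RandomForestClassifier(n_estimators=100, random_state=0)
level2.fit(X[crop], y[crop])

# Inference: apply level 2 only where level 1 predicts cropland
mask = level1.predict(X).astype(bool)
pred = np.zeros_like(y)
pred[mask] = level2.predict(X[mask])
```

The design choice is that errors at the coarse thematic level do not propagate into the fine-level training set, and each level can use the features (e.g. Red-edge bands) best suited to its classes.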
The performance of LULC classifications can also be improved by including additional input variables [11], especially those beyond the pixel level. Enriching pixels with geographic objects is a promising application in this case [10]. However, providing such analysis across broad geographic scales is a great challenge, and multi-temporal segmentation methods are under development to address it and to support LULCC analysis. Meanwhile, methods that combine datasets on human activities (e.g., crop management practices) and geographic conditions are being considered to provide information beyond the pixel, improving LULCC detection.
Several studies have shown the importance of contextual ancillary variables, such as topographic, textural, and climatic data, used as metrics or features to describe the structure and morphology of the landscape [80,196] and to mitigate spectral confusion among spectrally similar classes [197,198]. Zhao et al. [197] remarked on the importance of topographic data to map LULC in Chile, a region of complex topography. Also for Chile, Zhu et al. [198] showed that a Digital Elevation Model and its derivatives (aspect, position index, and slope), together with water, snow, and cloud probabilities, improved LULC classification. Incorporating texture metrics to increase the separability among different LULC types was highlighted by [121]. Gilbertson and van Niekerk [199] showed that incorporating textural metrics within an OBIA approach improved crop classification accuracies. Discriminating between different forest stages, Ref. [150] demonstrated the importance of textural metrics of Red-edge and SWIR bands and associated VIs. Chen et al. [17] reinforced the importance of texture as a subtle feature for differentiating cultivated lands from natural vegetation, and Wang et al. [21] showed the importance of downscaling to extract more accurate texture information. Frantz [116] employed textural and spectral homogeneity metrics in the FORCE framework. Texture metrics are often applied within OBIA because traditional per-pixel methods are not fully suitable for understanding landscape shape and context [80]. Zeferino et al. [196] demonstrated that classification accuracies in heterogeneous areas, such as the Eastern Amazon, Brazil, can be improved by incorporating environmental variables (e.g., geological, pedological, climatic, and topographic) into an RF model together with L8/OLI spectral data.

5.3.4. Data Cubes and ARD

The unprecedented availability of satellite data, driven by recent changes in imagery costs and computing technology, has enabled a new approach to working with big data, integrating different platforms and data providers [22,33]. One potential form of encompassing data storage and processing, network requirements, and web applications, while providing interoperability, access, analytics, and the sharing of EO imagery, is the organization of data in the broadly used data cube format [49,200]. Lewis et al. [137] defined a data cube as regular, non-overlapping tiles of gridded sensor data, stacked according to the time of data capture (observation), leading to the data cube visual metaphor. Nativi et al. [143] considered a data cube an infrastructure designed to allow data analysis, ready to use and interoperable at the programmatic level of the users. The term is widely used to refer to data integrated into multidimensional structures. Appel and Pebesma [201] defined a data cube as a 3D array with two spatial dimensions, one temporal dimension, and an arbitrary number of attributes, where the spatial dimensions refer to a spatial reference system, the cells have a constant spatial size, there is no spatial or temporal overlap between cells, the temporal reference is a known set of temporal intervals, and each cell holds a vector of attributes for every combination of dimensions.
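The Appel and Pebesma definition can be sketched with a plain NumPy array: two spatial dimensions, one temporal dimension, and a vector of attributes (here, spectral bands) per cell. All sizes, the cell size, and the temporal spacing below are illustrative, not tied to any specific data cube implementation.

```python
import numpy as np

# A minimal data cube: (time, y, x, attributes). Dimension sizes are
# illustrative, e.g. 23 sixteen-day composites over a 100 x 100 tile
# with 4 spectral bands per cell.
n_time, n_y, n_x, n_bands = 23, 100, 100, 4
cube = np.zeros((n_time, n_y, n_x, n_bands), dtype=np.float32)

# A regular, non-overlapping spatial grid with constant cell size, and a
# known set of temporal intervals, as the definition requires
cell_size = 30.0                         # metres, Landsat-like resolution
timestamps = np.arange(n_time) * 16      # day offsets of each time slice

# A pixel time series, the unit of analysis for LULCC trajectories, is a
# single slice along the temporal axis:
ts = cube[:, 50, 50, 0]                  # band-0 series for one cell
```

Operational systems (ODC, GEE, e-sensing) add tiling, indexing, and lazy loading on top of this abstraction, but the analysis model, functions applied along the temporal axis of a regular grid, is the same.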
Both the data cube and big data concepts converge with remote sensing imagery into Earth Observation Data Cubes (EODC), an approach to storing, organizing, managing, and analyzing Earth Observation data [202]. The main objective of EODC is to facilitate EO data usage by addressing the Volume, Velocity, and Variety challenges and providing access to large spatio-temporal data in an analysis-ready format [203]. Focusing on the analysis of LULCC trajectories, an EODC can be seen as time series associated with spatially aligned partitions of space, ready for analysis [201], outperforming other technologies for time series analysis of large volumes of satellite analysis-ready data (ARD) [143]. Many initiatives have been developed to build high temporal frequency data cubes for accurate LULC classification, such as the Australian Data Cube [137] and the Swiss Data Cube [57]. Additionally, operational platforms and technologies, such as the Open Data Cube (ODC) [137], e-sensing [204], and Google Earth Engine (GEE) [205], are capable of analyzing EO data in a data cube format. These initiatives are paving the way for broadening the use of EO data to larger communities of users, supporting decision-makers with information converted into meaningful geophysical variables, and ultimately unlocking the information power of EO data [136].
Lewis et al. [137] described the creation of data cubes from surface reflectance products of remote sensing images, together with a software environment to manage them, called the Australian Geoscience Data Cube (AGDC). This definition includes not only the data cubes of satellite imagery but also the processes to build them and the software environment required to access, prepare, index, and manage them. The AGDC evolved into AGDCv2, supporting other coordinate systems, formats, and data provenance, and was renamed ODC, with operational deployment initiatives such as the EO service Digital Earth Australia (DEA) [206]. Giuliani et al. [57] used interoperable service chains to produce Landsat ARD for the Swiss Data Cube (SDC). To have ARD products ingested, stored in a database, and readily available, the SDC defined a workflow for ARD production based on data acquisition, pre-processing, and conversion to surface reflectance. Augustin et al. [207] proposed the semantic EO data cube, in which each observation carries at least one categorical interpretation that can be queried in the same instance; semantic content-based queries over an area of interest and temporal extent then become possible, such as retrieving the observed moment in time with the maximum vegetation extent. Mahecha et al. [208] introduced the concept of Earth system data cubes and implemented it by integrating ARD with a suitable analytic interface, as a way to treat multiple data dimensions and allow the effective application of user-defined functions to co-interpret EO data.
The ingestion and storage of ARD are common to all data cube conceptualizations and initiatives; ARD results from processing satellite imagery from data acquisition through radiometric calibration and conversion to Top-of-Atmosphere (TOA) reflectance, and finally to surface reflectance [22,209]. Unfortunately, different EODC initiatives are being developed with contrasting strategies to produce ARD; consequently, obtaining uniform and consistent ARD remains a challenge [49]. The CEOS Analysis Ready Data for Land (CARD4L) framework (https://ceos.org/ard) states that satellite data should meet certain requirements, including interoperability through time, space, and with other datasets, and should allow immediate analysis with minimum effort. Dwyer et al. [203] state the importance of gridding the data to a common equal-area projection, along with other processing procedures, to provide adequate ARD products such as TOA Reflectance, Brightness Temperature, Surface Reflectance, Provisional Surface Temperature, and Quality Assessment.
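The DN-to-TOA step in this processing chain can be sketched for Landsat 8 Level-1 data, whose MTL metadata publishes a reflectance gain and offset per band (the 2.0e-5 and -0.1 values below are the typical defaults, but should always be read from the scene metadata). This is a simplified sketch of one ARD step, without per-pixel solar angles or atmospheric correction to surface reflectance.

```python
import numpy as np

def dn_to_toa_reflectance(dn, sun_elev_deg, mult=2.0e-5, add=-0.1):
    """Convert Landsat 8 Level-1 digital numbers to TOA reflectance.

    `mult` and `add` correspond to the REFLECTANCE_MULT/ADD rescaling
    coefficients published in the scene's MTL file; the result is then
    corrected for the solar elevation angle (scene-centre approximation).
    """
    rho = mult * np.asarray(dn, dtype=float) + add      # planetary reflectance
    return rho / np.sin(np.deg2rad(sun_elev_deg))       # sun-angle correction

toa = dn_to_toa_reflectance(np.array([8000, 12000, 20000]), sun_elev_deg=55.0)
```

ARD products go one step further, applying atmospheric correction to derive surface reflectance, plus cloud masks and quality flags, so that users never have to perform this conversion themselves.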
Qiu et al. [210] evaluated the temporal consistency of the USGS Landsat ARD dataset and recommended several processing streamlines related to resampling, cloud/cloud shadow detection, Bidirectional Reflectance Distribution Function (BRDF) correction, and topographic correction before using it for SITS analysis. Applying these recommendations, Potapov et al. [209] explored the ARD concept defined by CEOS to describe the 16-day GLAD ARD product (https://glad.umd.edu/ard/home), developed as input for LULC and LULCC mapping. However, despite its global radiometric consistency, the product does not represent surface reflectance and is therefore not suitable as direct input for precise measurements of photosynthesis, water quality, and other variables that require actual surface reflectance; in addition, the irregular frequency of clear-sky observations limits its direct use for LULC mapping. Hence, the authors recommend using phenological and change detection metrics to overcome these issues.

5.3.5. Incorporating Radar Data

Another type of integration to be further explored is that of optical and Synthetic Aperture Radar (SAR) data. SAR data are useful for extracting crop planting areas and identifying crop types [131]. This technology improves data acquisition during decisive monitoring periods, since it senses surface characteristics independently of solar illumination and cloud cover [114], factors that may limit optical data acquisition. Despite these advantages, SAR data were restricted to the scientific community for a long period because of their limited availability, caused by the scarcity of good digital elevation models (DEM), and their more complex data structures compared with optical data [114,131]. With the launch of ESA's Sentinel-1 mission, a constellation of two polar-orbiting satellites that acquire imagery regardless of weather and provide dual-polarization (VV+VH) C-band (5.405 GHz) SAR data, the use of SAR data has spread to mitigate cloud interference [3] and classification limitations in heterogeneous landscapes [109]. Due to cloud cover, optical data can present discontinuities during key crop growth stages [158]. For crop types with similar phenological cycles, this integration can improve reliable discrimination [85]. As SAR reflects vegetation structure and optical imagery captures multi-spectral crop information, the two data types complement each other [93], combining the all-weather capability of SAR with the spectral information and short revisit periods of optical sensors [166].
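A minimal sketch of the feature-level fusion described above, with synthetic per-pixel values (the feature distributions and the class structure are invented for illustration): optical reflectances and SAR backscatter are stacked into one feature vector, and a random forest is compared against optical features alone.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400

# Illustrative per-pixel features: three optical reflectances (e.g.
# S2/MSI Red, NIR, SWIR) and two SAR backscatter values (e.g. S1 VV/VH
# in dB). The two classes overlap spectrally but differ structurally,
# so only the SAR channels separate them.
optical = rng.normal(0.3, 0.05, size=(n, 3))
sar = rng.normal(-12.0, 1.0, size=(n, 2))
y = np.arange(n) % 2
sar[y == 1] += 3.0                       # structural difference seen by SAR

# Feature-level fusion: stack optical and SAR into one feature vector
X = np.hstack([optical, sar])

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_fused = cross_val_score(rf, X, y, cv=3).mean()
acc_optical = cross_val_score(rf, optical, y, cv=3).mean()
```

In real workflows, the stacked features would be multi-temporal (per-date bands, VIs, and backscatter), which is exactly the setup of the S1+S2/MSI studies reviewed below.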
The principal SAR features observed were Haralick textures, polarizations, backscattering coefficients, and polarimetric decompositions. Sun et al. [131] used S1, S2/MSI, and L8/OLI multi-temporal data with SVM, RF, and Artificial Neural Network algorithms to identify six crop types, obtaining a high OA (0.93), especially with RF. Clerici et al. [166] integrated S1 and S2/MSI data for LULC classification in the Magdalena region, Colombia, producing a LULC map of six classes with an OA of 88.75% using the SVM algorithm. Kussul et al. [20] used multi-temporal optical and SAR data with a multi-layer perceptron classifier for crop mapping in Ukraine, achieving an OA over 88% for 11 classes. Steinhausen et al. [3] integrated S1 and S2/MSI data to classify 13 LULC classes in the Chennai Basin, India, a cloud-prone monsoon region with small-scale agriculture and multiple cropping patterns, in the 2015/16 harvest period, with an OA of 91.53%. Poortinga et al. [211] attested that the integration of L8/OLI, S1, and S2/MSI raised classification accuracies in Myanmar in 2017 and 2018 to up to 91%. Similar improvements were observed by Carrasco et al. [178] when mapping 30 LULC classes over Wales. Mercier et al. [138] used S1 and S2/MSI time series to monitor forest and agriculture mosaics over two heterogeneous landscapes: a temperate mountainous landscape in the Cantabrian Range (Spain) and a tropical forested landscape in Paragominas (Brazil). The mean Kappa increased from 0.59–0.83 to 0.55–0.85 when integrating both data sources, and the method enabled defining key periods that discriminate LULC classes. The analysis showed the importance of VV and VH polarization and SWIR band data for the classification. Vincent et al. [175] also obtained positive effects by optimizing the temporal image dates to map sunflower fields.
Mapping wetland classes, Kaplan and Avdan [212] attested that Red-edge bands have significant influence over intensively vegetated classes, while radar bands have more influence over partially decayed vegetated ones. Chatziantoniou et al. [183] evaluated the integrated use of S1 and S2/MSI data with the SVM classifier for mapping wetlands in the National Park of Koronia and Volvi Lakes, Greece. The OA was 94.82% when dividing pasture types into three classes, single-species, two-species, and multi-species composition, attesting the suitability of S1 and S2/MSI data for improving LULC classification in humid areas. For mangrove areas, less frequent VIs have presented interesting advances in classification workflows [213,214,215]. Crabbe et al. [216] demonstrated that integrating S1 and S2/MSI features improved the classification of pasturelands in Australia; mapping six classes, the OA with the K-Nearest Neighbours (KNN) classifier was 0.89, versus 0.96 with RF and 0.93 with SVM. Sonobe et al. [217] mapped six crop types in Japan using images of the 2016 growing season, with an OA of 96.8% from a kernel-based extreme learning machine (ELM) classification model, attesting the relevance of VV polarization and the Red band. Improvements in classification performance were also noted by Sun et al. [85] when evaluating the synergistic use of S1 and S2/MSI time series for oasis crop type discrimination in China. Cai et al. [182] proposed an object-based method for paddy rice mapping using phenological parameters derived from S1 and S2/MSI time series. Integrating S2/MSI NDVI, S1 backscattering time series, and phenology data, the OA was 95% for rice and four other classes, demonstrating suitability for rice mapping in cloudy and rainy areas.
Different methodologies have integrated L8/OLI, S1, and S2/MSI data into RF and SVM approaches to map rice in heterogeneous and cloud-prone areas of China. The results indicate that integrating VH polarization and spectral indices can produce accurate rice distribution maps in these environments [218,219]. Ienco et al. [141] presented a deep learning architecture to integrate S1 and S2/MSI time series, namely TWINNS (TWIn Neural Networks for Sentinel data), useful for discovering spatial and temporal dependencies. Liu et al. [66] developed an algorithm to integrate L7/TM, L8/OLI, and S2/MSI time series data to map cropping intensity at large scales across seven study areas in China. Based on field-scale sample data, the annual cropping intensity maps had accuracies of 89–99% and Kappa of 0.76–0.91 for three classes: single-, double-, and triple-cropping; the OA of the annual cropping intensity maps was 93%, with a Kappa of 0.84. Onojeghuo et al. [220] applied co-registered multi-temporal S1 data and L8/OLI-derived NDVI data to produce 10 m spatial resolution maps of paddy rice fields across the Sanjiang plain, China, with limited ground data. The accuracies were 95.2% (OA) and 96.7% when distinguishing paddy rice fields from non-rice fields, vegetated areas, and built areas. Despite these results, the effective integration of these data sources is not simple [141]. The joint use of data cube architectures [136] is a promising strategy for handling the large volume of remote sensing data generated when processing integrated sources. Other alternatives are cloud computing services, such as AWS and GEE, already consolidated for the analysis of large orbital datasets of the terrestrial surface. Weiss et al. [50] explained that the open data policy for remote sensing has facilitated the exploitation of dense data from different sources.
Such emerging technologies also allow a change of thinking in crop type mapping over large areas, leveraging new techniques such as locally adjusting the spatial resolution requirements [221].

5.4. Summary of Applications

As noted, the L8/OLI and S2/MSI data have been applied in different ways, and a summary can guide new users (Figure 8).
We noted a prevalence of studies exploiting the quality of L8/OLI and S2/MSI data to calculate VIs, extract phenometrics and biophysical parameters, and select temporal or spatial features. Supervised classification was the principal approach in the reviewed papers, through machine learning, deep learning, and artificial neural networks. Automatic classification is growing and can be further explored. Big data analytics and multi-sensor approaches have recently expanded with the advent of new technologies, such as data cubes, ARD, and cloud computing, enabling very efficient strategies for implementing data fusion, surface reflectance transformation, atmospheric correction, spatial and temporal segmentation, gap filling, and cloud masking algorithms. Together with integration methods, this enables multi-temporal and dense time series analysis, improving data mining and different clustering and compositing techniques. These connections can guide new users to best practices for applying L8/OLI and S2/MSI data in future studies.

6. Conclusions

L8/OLI and S2/MSI data have a vast set of recent applications that demonstrates their potential to overcome the challenges of identifying LULC classes along heterogeneous ecological gradients, from forests to agricultural areas. The principal remaining gap is the availability of representative sample datasets. Classification models tend to achieve higher accuracies with representative samples, and RF, SVM, and neural network algorithms have all detected LULC classes with no significant difference among them, while deep learning methods are also growing. Moreover, spectral band combinations that explore more of the electromagnetic spectrum, as highlighted by the use of SWIR and Red-edge bands, increase classification performance for a wide range of LULC classes, as well as the retrieval of biophysical parameters.
Less frequently applied VIs have been useful to distinguish more vegetation classes from a multi-temporal and multi-sensor perspective [222], especially in areas with a high diversity of classes with similar spectral characteristics. However, it is important to revisit previous studies to understand how these indices were used and to avoid confusion before choosing a VI consistent with the research objectives.
Despite our focus on L8/OLI bands, we noted little application of the thermal bands of the L8/Thermal InfraRed Sensor (TIRS) for LULC classification and LULCC detection. This scarcity of thermal band applications was also detected and discussed by Weiss et al. [50] in a meta-review on remote sensing for agricultural applications. The authors explain that this occurs because the thermal signal is much more variable in time, depending on the plant stand microclimate, and because calibrating the TIR sensor under field conditions is more complex and delicate than in the solar spectral domain. Thermal bands are more commonly applied with UAVs and in-field cameras, as pointed out by García-Berná et al. [223].
The results presented in this systematic review can guide the scientific community in the use of L8/OLI and S2/MSI data for LULC and LULCC applications. In addition, the prognosis for the continuity of the Landsat and Sentinel series, expanding the use of the electromagnetic spectrum and incorporating SAR data, is promising and will ensure the availability of the data needed by the remote sensing community interested in LULC and LULCC applications, creating a robust and dense virtual constellation that overcomes the limitations of traditional approaches and enables more detailed and consistent knowledge of the landscape.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/2072-4292/12/18/3062/s1, Table S1: Conventional vegetation indices cited in the systematic review, Table S2: Less frequent vegetation indices cited in the systematic review as input for LULC mapping, Table S3: Less frequent vegetation indices that can be further explored as input for LULC mapping.

Author Contributions

According to the CRediT roles: M.E.D.C.: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing—original draft preparation. M.C.A.P.: Conceptualization, Formal analysis, Validation, Writing—review and editing. I.D.S.: Conceptualization, Formal analysis, Funding acquisition, Methodology, Supervision, Validation, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES)-Finance Code 001, Process Number 88887.351470/2019-00 (M.E.D.C.), and the Environmental Monitoring of Brazilian Biomes project (Brazil Data Cube), funded by the Amazon Fund through the financial collaboration of the Brazilian Development Bank (BNDES) and the Foundation for Science, Technology, and Space Applications (FUNCATE), Process Number 17.2.0536.1 (M.C.A.P.).

Acknowledgments

The authors would like to acknowledge the National Institute for Space Research (INPE), the Coordination for the Improvement of Higher Education Personnel (CAPES), and the Brazil Data Cube Project for the support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef] [Green Version]
  2. Lambin, E.; Rounsevell, M.; Geist, H. Are agricultural land-use models able to predict changes in land-use intensity? Agric. Ecosyst. Environ. 2000, 82, 321–331. [Google Scholar] [CrossRef]
  3. Steinhausen, M.J.; Wagner, P.D.; Narasimhan, B.; Waske, B. Combining Sentinel-1 and Sentinel-2 data for improved land use and land cover mapping of monsoon regions. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 595–604. [Google Scholar] [CrossRef]
  4. Becker-Reshef, I.; Justice, C.; Barker, B.; Humber, M.; Rembold, F.; Bonifacio, R.; Zappacosta, M.; Budde, M.; Magadzire, T.; Shitote, C.; et al. Strengthening agricultural decisions in countries at risk of food insecurity: The GEOGLAM Crop Monitor for Early Warning. Remote Sens. Environ. 2020, 237, 111553. [Google Scholar] [CrossRef]
  5. Xu, Y.; Yu, L.; Zhao, F.R.; Cai, X.; Zhao, J.; Lu, H.; Gong, P. Tracking annual cropland changes from 1984 to 2016 using time-series Landsat images with a change-detection and post-classification approach: Experiments from three sites in Africa. Remote Sens. Environ. 2018, 218, 13–31. [Google Scholar] [CrossRef]
  6. Feng, Q.; Gong, J.; Liu, J.; Li, Y. Monitoring Cropland Dynamics of the Yellow River Delta based on Multi-Temporal Landsat Imagery over 1986 to 2015. Sustainability 2015, 7, 14834–14858. [Google Scholar] [CrossRef] [Green Version]
  7. Szantoi, Z.; Geller, G.N.; Tsendbazar, N.E.; See, L.; Griffiths, P.; Fritz, S.; Gong, P.; Herold, M.; Mora, B.; Obregón, A. Addressing the need for improved land cover map products for policy support. Environ. Sci. Policy 2020, 112, 28–35. [Google Scholar] [CrossRef]
  8. Whitcraft, A.K.; Becker-Reshef, I.; Justice, C.O.; Gifford, L.; Kavvada, A.; Jarvis, I. No pixel left behind: Toward integrating Earth Observations for agriculture into the United Nations Sustainable Development Goals framework. Remote Sens. Environ. 2019, 235, 111470. [Google Scholar] [CrossRef]
  9. Maus, V.; Câmara, G.; Appel, M.; Pebesma, E. dtwSat: Time-Weighted Dynamic Time Warping for Satellite Image Time Series Analysis in R. J. Stat. Softw. 2019, 88, 1–3. [Google Scholar] [CrossRef] [Green Version]
  10. Petitjean, F.; Kurtz, C.; Passat, N.; Gançarski, P. Spatio-temporal reasoning for the classification of satellite image time series. Pattern Recognit. Lett. 2012, 33, 1805–1815. [Google Scholar] [CrossRef] [Green Version]
  11. Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef] [Green Version]
  12. Bolton, D.K.; Gray, J.M.; Melaas, E.K.; Moon, M.; Eklundh, L.; Friedl, M.A. Continental-scale land surface phenology from harmonized Landsat 8 and Sentinel-2 imagery. Remote Sens. Environ. 2020, 240, 111685. [Google Scholar] [CrossRef]
  13. Wang, L.; Qi, F.; Shen, X.; Huang, J. Monitoring Multiple Cropping Index of Henan Province, China Based on MODIS-EVI Time Series Data and Savitzky-Golay Filtering Algorithm. Comput. Model. Eng. Sci. 2019, 119, 331–348. [Google Scholar] [CrossRef] [Green Version]
  14. Chaves, M.; Alves, M.; Oliveira, M.; Sáfadi, T. A Geostatistical Approach for Modeling Soybean Crop Area and Yield Based on Census and Remote Sensing Data. Remote Sens. 2018, 10, 680. [Google Scholar] [CrossRef] [Green Version]
  15. Picoli, M.C.A.; Camara, G.; Sanches, I.; Simões, R.; Carvalho, A.; Maciel, A.; Coutinho, A.; Esquerdo, J.; Antunes, J.; Begotti, R.A.; et al. Big earth observation time series analysis for monitoring Brazilian agriculture. ISPRS J. Photogramm. Remote Sens. 2018, 145, 1–12. [Google Scholar] [CrossRef]
  16. Simoes, R.; Picoli, M.C.A.; Camara, G.; Maciel, A.; Santos, L.; Andrade, P.R.; Sánchez, A.; Ferreira, K.; Carvalho, A. Land use and cover maps for Mato Grosso State in Brazil from 2001 to 2017. Sci. Data 2020, 7, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Chen, J.; Chen, J.; Liao, A.; Cao, X.; Chen, L.; Chen, X.; He, C.; Han, G.; Peng, S.; Lu, M.; et al. Global land cover mapping at 30 m resolution: A POK-based operational approach. ISPRS J. Photogramm. Remote Sens. 2015, 103, 7–27. [Google Scholar] [CrossRef] [Green Version]
  18. Huang, C.; Zhang, C.; He, Y.; Liu, Q.; Li, H.; Su, F.; Liu, G.; Bridhikitti, A. Land Cover Mapping in Cloud-Prone Tropical Areas Using Sentinel-2 Data: Integrating Spectral Features with Ndvi Temporal Dynamics. Remote Sens. 2020, 12, 1163. [Google Scholar] [CrossRef] [Green Version]
  19. Bégué, A.; Arvor, D.; Bellon, B.; Betbeder, J.; de Abelleyra, D.; Ferraz, R.P.; Lebourgeois, V.; Lelong, C.; Simões, M.; Verón, S.R. Remote sensing and cropping practices: A review. Remote Sens. 2018, 10, 99. [Google Scholar] [CrossRef] [Green Version]
  20. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  21. Wang, Q.; Blackburn, G.A.; Onojeghuo, A.O.; Dash, J.; Zhou, L.; Zhang, Y.; Atkinson, P.M. Fusion of Landsat 8 OLI and Sentinel-2 MSI Data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3885–3899. [Google Scholar] [CrossRef] [Green Version]
  22. Claverie, M.; Ju, J.; Masek, J.G.; Dungan, J.L.; Vermote, E.F.; Roger, J.C.; Skakun, S.V.; Justice, C. The Harmonized Landsat and Sentinel-2 surface reflectance data set. Remote Sens. Environ. 2018, 219, 145–161. [Google Scholar] [CrossRef]
23. Wolanin, A.; Camps-Valls, G.; Gómez-Chova, L.; Mateo-García, G.; van der Tol, C.; Zhang, Y.; Guanter, L. Estimating crop primary productivity with Sentinel-2 and Landsat 8 using machine learning methods trained with radiative transfer simulations. Remote Sens. Environ. 2019, 225, 441–457. [Google Scholar] [CrossRef]
  24. Inglada, J.; Vincent, A.; Arias, M.; Marais-Sicre, C. Improved early crop type identification by joint use of high temporal resolution sar and optical image time series. Remote Sens. 2016, 8, 362. [Google Scholar] [CrossRef] [Green Version]
  25. Woodcock, C.E.; Loveland, T.R.; Herold, M. Preface: Time series analysis imagery special issue. Remote Sens. Environ. 2020, 238, 111613. [Google Scholar] [CrossRef]
  26. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293. [Google Scholar] [CrossRef]
  27. Chirici, G.; Mura, M.; McInerney, D.; Py, N.; Tomppo, E.O.; Waser, L.T.; Travaglini, D.; McRoberts, R.E. A meta-analysis and review of the literature on the k-Nearest Neighbors technique for forestry applications that use remotely sensed data. Remote Sens. Environ. 2016, 176, 282–294. [Google Scholar] [CrossRef]
  28. Zeng, L.; Wardlow, B.D.; Xiang, D.; Hu, S.; Li, D. A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sens. Environ. 2020, 237, 111511. [Google Scholar] [CrossRef]
  29. Wulder, M.A.; Hilker, T.; White, J.C.; Coops, N.C.; Masek, J.G.; Pflugmacher, D.; Crevier, Y. Virtual constellations for global terrestrial monitoring. Remote Sens. Environ. 2015, 170, 62–76. [Google Scholar] [CrossRef] [Green Version]
  30. Piedelobo, L.; Hernández-López, D.; Ballesteros, R.; Chakhar, A.; Del Pozo, S.; González-Aguilera, D.; Moreno, M.A. Scalable pixel-based crop classification combining Sentinel-2 and Landsat-8 data time series: Case study of the Duero river basin. Agric. Syst. 2019, 171, 36–50. [Google Scholar] [CrossRef]
  31. European Space Agency. Sentinel-2: Resolution and Swath, 2020. Available online: https://sentinel.esa.int/web/sentinel/missions/sentinel-2/instrument-payload/resolution-and-swath (accessed on 18 August 2020).
  32. National Aeronautics and Space Administration. Spectral Response of the Operational Land Imager In-Band, Band-Average Relative Spectral Response, 2020. Available online: https://landsat.gsfc.nasa.gov/preliminary-spectral-response-of-the-operational-land-imager-in-band-band-average-relative-spectral-response/ (accessed on 18 August 2020).
  33. Defourny, P.; Bontemps, S.; Bellemans, N.; Cara, C.; Dedieu, G.; Guzzonato, E.; Hagolle, O.; Inglada, J.; Nicola, L.; Rabaute, T.; et al. Near real-time agriculture monitoring at national scale at parcel resolution: Performance assessment of the Sen2-Agri automated system in various cropping systems around the world. Remote Sens. Environ. 2019, 221, 551–568. [Google Scholar] [CrossRef]
  34. Maponya, M.G.; van Niekerk, A.; Mashimbye, Z.E. Pre-harvest classification of crop types using a Sentinel-2 time-series and machine learning. Comput. Electron. Agric. 2020, 169, 105164. [Google Scholar] [CrossRef]
35. Xie, Q.; Dash, J.; Huete, A.; Jiang, A.; Yin, G.; Ding, Y.; Peng, D.; Hall, C.C.; Brown, L.; Shi, Y.; et al. Retrieval of crop biophysical parameters from Sentinel-2 remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2019, 80, 187–195. [Google Scholar] [CrossRef]
  36. Sánchez-Espinosa, A.; Schröder, C. Land use and land cover mapping in wetlands one step closer to the ground: Sentinel-2 versus Landsat 8. J. Environ. Manag. 2019, 247, 484–498. [Google Scholar] [CrossRef] [PubMed]
  37. Xie, Q.; Dash, J.; Huang, W.; Peng, D.; Qin, Q.; Mortimer, H.; Casa, R.; Pignatti, S.; Laneve, G.; Pascucci, S.; et al. Vegetation Indices Combining the Red and Red-Edge Spectral Information for Leaf Area Index Retrieval. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1482–1492. [Google Scholar] [CrossRef] [Green Version]
  38. Fernández-Manso, A.; Fernández-Manso, O.; Quintano, C. SENTINEL-2A red-edge spectral indices suitability for discriminating burn severity. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 170–175. [Google Scholar] [CrossRef]
  39. Munyati, C.; Balzter, H.; Economon, E. Correlating Sentinel-2 MSI-derived vegetation indices with in-situ reflectance and tissue macronutrients in savannah grass. Int. J. Remote Sens. 2020, 41, 3820–3844. [Google Scholar] [CrossRef]
  40. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.I. Crop classification from Sentinel-2-derived vegetation indices using ensemble learning. J. Appl. Remote Sens. 2018, 12, 1. [Google Scholar] [CrossRef] [Green Version]
  41. Li, J.; Roy, D.P. A Global Analysis of Sentinel-2A, Sentinel-2B and Landsat-8 Data Revisit Intervals and Implications for Terrestrial Monitoring. Remote Sens. 2017, 9, 902. [Google Scholar] [CrossRef] [Green Version]
  42. Skakun, S.; Vermote, E.; Roger, J.C.; Franch, B. Combined Use of Landsat-8 and Sentinel-2A Images for Winter Crop Mapping and Winter Wheat Yield Assessment at Regional Scale. AIMS Geosci. 2017, 3, 163–186. [Google Scholar] [CrossRef]
  43. Wulder, M.A.; Loveland, T.R.; Roy, D.P.; Crawford, C.J.; Masek, J.G.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Belward, A.S.; Cohen, W.B.; et al. Current status of Landsat program, science, and applications. Remote Sens. Environ. 2019, 225, 127–147. [Google Scholar] [CrossRef]
  44. See, L.; Fritz, S.; You, L.; Ramankutty, N.; Herrero, M.; Justice, C.; Becker-Reshef, I.; Thornton, P.; Erb, K.; Gong, P.; et al. Improved global cropland data as an essential ingredient for food security. Glob. Food Secur. 2015, 4, 37–45. [Google Scholar] [CrossRef]
  45. Belgiu, M.; Drăgut, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  46. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  47. Wulder, M.A.; Coops, N.C.; Roy, D.P.; White, J.C.; Hermosilla, T. Land cover 2.0. Int. J. Remote Sens. 2018, 39, 4254–4284. [Google Scholar] [CrossRef] [Green Version]
  48. Pandey, P.C.; Koutsias, N.; Petropoulos, G.P.; Srivastava, P.K.; Dor, E.B. Land use/land cover in view of earth observation: Data sources, input dimensions, and classifiers—a review of the state of the art. Geocarto Int. 2019, 1–32. [Google Scholar] [CrossRef]
  49. Sudmanns, M.; Tiede, D.; Lang, S.; Bergstedt, H.; Trost, G.; Augustin, H.; Baraldi, A.; Blaschke, T. Big Earth data: Disruptive changes in Earth observation data management and analysis? Int. J. Digital Earth 2019, 1–19. [Google Scholar] [CrossRef]
  50. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  51. Yu, M.; Bambacus, M.; Cervone, G.; Clarke, K.; Duffy, D.; Huang, Q.; Li, J.; Li, W.; Li, Z.; Liu, Q.; et al. Spatiotemporal event detection: A review. Int. J. Digital Earth 2020, 1–27. [Google Scholar] [CrossRef] [Green Version]
  52. Phiri, D.; Simwanda, M.; Salekin, S.; Nyirenda, V.R.; Murayama, Y.; Ranagalage, M. Sentinel-2 Data for Land Cover/Use Mapping: A Review. Remote Sens. 2020, 12, 2291. [Google Scholar] [CrossRef]
53. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Mengist, W.; Soromessa, T.; Legese, G. Method for conducting systematic literature review and meta-analysis for environmental science research. MethodsX 2020, 7, 100777. [Google Scholar] [CrossRef] [PubMed]
  55. Bajocco, S.; Raparelli, E.; Teofili, T.; Bascietto, M.; Ricotta, C. Text Mining in Remotely Sensed Phenology Studies: A Review on Research Development, Main Topics, and Emerging Issues. Remote Sens. 2019, 11, 2751. [Google Scholar] [CrossRef] [Green Version]
  56. Doxani, G.; Vermote, E.; Roger, J.C.; Gascon, F.; Adriaensen, S.; Frantz, D.; Hagolle, O.; Hollstein, A.; Kirches, G.; Li, F.; et al. Atmospheric Correction Inter-Comparison Exercise. Remote Sens. 2018, 10, 352. [Google Scholar] [CrossRef] [Green Version]
  57. Giuliani, G.; Chatenoux, B.; De Bono, A.; Rodila, D.; Richard, J.P.; Allenbach, K.; Dao, H.; Peduzzi, P. Building an Earth Observations Data Cube: Lessons learned from the Swiss Data Cube (SDC) on generating Analysis Ready Data (ARD). Big Earth Data 2017, 1, 100–117. [Google Scholar] [CrossRef] [Green Version]
  58. Wu, Z.; Snyder, G.; Vadnais, C.; Arora, R.; Babcock, M.; Stensaas, G.; Doucette, P.; Newman, T. User needs for future Landsat missions. Remote Sens. Environ. 2019, 231, 111214. [Google Scholar] [CrossRef]
  59. Tian, F.; Wu, B.; Zeng, H.; Zhang, X.; Xu, J. Efficient Identification of Corn Cultivation Area with Multitemporal Synthetic Aperture Radar and Optical Images in the Google Earth Engine Cloud Platform. Remote Sens. 2019, 11, 629. [Google Scholar] [CrossRef] [Green Version]
  60. Ye, S.; Pontius, R.G.; Rakshit, R. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches. ISPRS J. Photogramm. Remote Sens. 2018, 141, 137–147. [Google Scholar] [CrossRef]
  61. Csillik, O.; Belgiu, M.; Asner, G.P.; Kelly, M. Object-Based Time-Constrained Dynamic Time Warping Classification of Crops Using Sentinel-2. Remote Sens. 2019, 11, 1257. [Google Scholar] [CrossRef] [Green Version]
  62. Peña, M.A.; Liao, R.; Brenning, A. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy. ISPRS J. Photogramm. Remote Sens. 2017, 128, 158–169. [Google Scholar] [CrossRef]
  63. Niazmardi, S.; Homayouni, S.; Safari, A.; McNairn, H.; Shang, J.; Beckett, K. Histogram-based spatio-temporal feature classification of vegetation indices time-series for crop mapping. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 34–41. [Google Scholar] [CrossRef]
64. Puletti, N.; Chianucci, F.; Castaldi, C. Use of Sentinel-2 for forest classification in Mediterranean environments. Ann. Silvic. Res. 2017, 42, 1–7. [Google Scholar] [CrossRef]
  65. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  66. Liu, L.; Xiao, X.; Qin, Y.; Wang, J.; Xu, X.; Hu, Y.; Qiao, Z. Mapping cropping intensity in China using time series Landsat and Sentinel-2 images and Google Earth Engine. Remote Sens. Environ. 2020, 239, 111624. [Google Scholar] [CrossRef]
  67. Xiong, J.; Thenkabail, P.S.; Tilton, J.C.; Gumma, M.K.; Teluguntla, P.; Oliphant, A.; Congalton, R.G.; Yadav, K.; Gorelick, N. Nominal 30-m cropland extent map of continental Africa by integrating pixel-based and object-based algorithms using Sentinel-2 and Landsat-8 data on Google Earth Engine. Remote Sens. 2017, 9, 1065. [Google Scholar] [CrossRef] [Green Version]
  68. Bendini, H.N.; Fonseca, L.M.G.; Schwieder, M.; Körting, T.S.; Rufin, P.; Sanches, I.D.; Leitão, P.J.; Hostert, P. Detailed agricultural land classification in the Brazilian cerrado based on phenological information from dense satellite image time series. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101872. [Google Scholar] [CrossRef]
  69. Sharma, A.; Liu, X.; Yang, X.; Shi, D. A patch-based convolutional neural network for remote sensing image classification. Neural Netw. 2017, 95, 19–28. [Google Scholar] [CrossRef]
  70. Nasrallah, A.; Baghdadi, N.; Mhawej, M.; Faour, G.; Darwish, T.; Belhouchette, H.; Darwich, S. A Novel Approach for Mapping Wheat Areas Using High Resolution Sentinel-2 Images. Sensors 2018, 18, 2089. [Google Scholar] [CrossRef] [Green Version]
  71. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Dedieu, G. Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas. Remote Sens. Environ. 2016, 187, 156–168. [Google Scholar] [CrossRef]
  72. Teluguntla, P.; Thenkabail, P.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340. [Google Scholar] [CrossRef]
  73. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
74. Toure, S.I.; Stow, D.A.; Shih, H.C.; Weeks, J.; Lopez-Carr, D. Land cover and land use change analysis using multi-spatial resolution data and object-based image analysis. Remote Sens. Environ. 2018, 210, 259–268. [Google Scholar] [CrossRef]
  75. Guttler, F.; Ienco, D.; Nin, J.; Teisseire, M.; Poncelet, P. A graph-based approach to detect spatiotemporal dynamics in satellite image time series. ISPRS J. Photogramm. Remote Sens. 2017, 130, 92–107. [Google Scholar] [CrossRef] [Green Version]
  76. Goodin, D.G.; Anibas, K.L.; Bezymennyi, M. Mapping land cover and land use from object-based classification: An example from a complex agricultural landscape. Int. J. Remote Sens. 2015, 36, 4702–4723. [Google Scholar] [CrossRef]
  77. Yan, L.; Roy, D.P. Conterminous United States crop field size quantification from multi-temporal Landsat data. Remote Sens. Environ. 2016, 172, 67–86. [Google Scholar] [CrossRef] [Green Version]
78. Khiali, L.; Ienco, D.; Teisseire, M. Object-oriented satellite image time series analysis using a graph-based representation. Ecol. Inform. 2018, 43, 52–64. [Google Scholar] [CrossRef] [Green Version]
  79. Watkins, B.; van Niekerk, A. A comparison of object-based image analysis approaches for field boundary delineation using multi-temporal Sentinel-2 imagery. Comput. Electron. Agric. 2019, 158, 294–302. [Google Scholar] [CrossRef]
  80. Graesser, J.; Ramankutty, N. Detection of cropland field parcels from Landsat imagery. Remote Sens. Environ. 2017, 201, 165–180. [Google Scholar] [CrossRef] [Green Version]
  81. Yin, H.; Pflugmacher, D.; Li, A.; Li, Z.; Hostert, P. Land use and land cover change in Inner Mongolia—Understanding the effects of China’s re-vegetation programs. Remote Sens. Environ. 2018, 204, 918–930. [Google Scholar] [CrossRef]
  82. Shang, M.; Wang, S.X.; Zhou, Y.; Du, C. Effects of Training Samples and Classifiers on Classification of Landsat-8 Imagery. J. Indian Soc. Remote Sens. 2018, 46, 1333–1340. [Google Scholar] [CrossRef]
  83. Clark, M.L. Comparison of multi-seasonal Landsat 8, Sentinel-2 and hyperspectral images for mapping forest alliances in Northern California. ISPRS J. Photogramm. Remote Sens. 2020, 159, 26–40. [Google Scholar] [CrossRef]
  84. Meyer, L.H.; Heurich, M.; Beudert, B.; Premier, J.; Pflugmacher, D. Comparison of Landsat-8 and Sentinel-2 Data for Estimation of Leaf Area Index in Temperate Forests. Remote Sens. 2019, 11, 1160. [Google Scholar] [CrossRef] [Green Version]
  85. Sun, L.; Chen, J.; Guo, S.; Deng, X.; Han, Y. Integration of Time Series Sentinel-1 and Sentinel-2 Imagery for Crop Type Mapping over Oasis Agricultural Areas. Remote Sens. 2020, 12, 158. [Google Scholar] [CrossRef] [Green Version]
  86. Corti, M.; Cavalli, D.; Cabassi, G.; Marino Gallina, P.; Bechini, L. Does remote and proximal optical sensing successfully estimate maize variables? A review. Eur. J. Agron. 2018, 99, 37–50. [Google Scholar] [CrossRef]
  87. Suess, S.; van der Linden, S.; Okujeni, A.; Griffiths, P.; Leitão, P.J.; Schwieder, M.; Hostert, P. Characterizing 32 years of shrub cover dynamics in southern Portugal using annual Landsat composites and machine learning regression modeling. Remote Sens. Environ. 2018, 219, 353–364. [Google Scholar] [CrossRef]
88. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Third Earth Resources Technology Satellite-1 Symposium—Volume I: Technical Presentations; NASA SP-351; NASA: Washington, DC, USA, 1974; pp. 309–317. [Google Scholar]
  89. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  90. Huete, A. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  91. Schultz, M.; Clevers, J.G.; Carter, S.; Verbesselt, J.; Avitabile, V.; Quang, H.V.; Herold, M. Performance of vegetation indices from Landsat time series in deforestation monitoring. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 318–327. [Google Scholar] [CrossRef]
92. Gao, B.C. NDWI—A Normalized Difference Water Index for Remote Sensing of Vegetation Liquid Water From Space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  93. Ferrant, S.; Selles, A.; Le Page, M.; Herrault, P.A.; Pelletier, C.; Al-Bitar, A.; Mermoz, S.; Gascoin, S.; Bouvet, A.; Saqalli, M.; et al. Detection of irrigated crops from Sentinel-1 and Sentinel-2 data to estimate seasonal groundwater use in South India. Remote Sens. 2017, 9, 1119. [Google Scholar] [CrossRef] [Green Version]
94. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  95. Frampton, W.J.; Dash, J.; Watmough, G.; Milton, E.J. Evaluating the capabilities of Sentinel-2 for quantitative estimation of biophysical variables in vegetation. ISPRS J. Photogramm. Remote Sens. 2013, 82, 83–92. [Google Scholar] [CrossRef] [Green Version]
  96. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  97. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  98. Jiang, Z.; Huete, A.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
  99. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  100. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
101. Parente, L.; Ferreira, L.; Faria, A.; Nogueira, S.; Araújo, F.; Teixeira, L.; Hagen, S. Monitoring the Brazilian pasturelands: A new mapping approach based on the Landsat 8 spectral and temporal domains. Int. J. Appl. Earth Obs. Geoinf. 2017, 62, 135–143. [Google Scholar] [CrossRef]
  102. Palchowdhuri, Y.; Valcarce-Diñeiro, R.; King, P.; Sanabria-Soto, M. Classification of multi-temporal spectral indices for crop type mapping: A case study in Coalville, UK. J. Agric. Sci. 2018, 156, 24–36. [Google Scholar] [CrossRef]
  103. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.I. Mapping crop cover using multi-temporal Landsat 8 OLI imagery. Int. J. Remote Sens. 2017, 38, 4348–4361. [Google Scholar] [CrossRef] [Green Version]
  104. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  105. Pastick, N.J.; Wylie, B.K.; Wu, Z. Spatiotemporal analysis of Landsat-8 and Sentinel-2 data to support monitoring of dryland ecosystems. Remote Sens. 2018, 10, 791. [Google Scholar] [CrossRef] [Green Version]
  106. Franch, B.; Vermote, E.; Skakun, S.; Roger, J.C.; Masek, J.; Ju, J.; Villaescusa-Nadal, J.; Santamaria-Artigas, A. A Method for Landsat and Sentinel 2 (HLS) BRDF Normalization. Remote Sens. 2019, 11, 632. [Google Scholar] [CrossRef] [Green Version]
  107. Griffiths, P.; Nendel, C.; Hostert, P. Intra-annual reflectance composites from Sentinel-2 and Landsat for national-scale crop and land cover mapping. Remote Sens. Environ. 2019, 220, 135–151. [Google Scholar] [CrossRef]
108. Roy, D.P.; Yan, L. Robust Landsat-based crop time series modelling. Remote Sens. Environ. 2020, 238, 110810. [Google Scholar] [CrossRef]
  109. Schwieder, M.; Leitão, P.J.; da Cunha Bustamante, M.M.; Ferreira, L.G.; Rabe, A.; Hostert, P. Mapping Brazilian savanna vegetation gradients with Landsat time series. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 361–370. [Google Scholar] [CrossRef]
  110. Roy, D.P.; Li, J.; Zhang, H.K.; Yan, L.; Huang, H.; Li, Z. Examination of Sentinel-2A multi-spectral instrument (MSI) reflectance anisotropy and the suitability of a general method to normalize MSI reflectance to nadir BRDF adjusted reflectance. Remote Sens. Environ. 2017, 199, 25–38. [Google Scholar] [CrossRef]
  111. Flood, N. Comparing Sentinel-2A and Landsat 7 and 8 Using Surface Reflectance over Australia. Remote Sens. 2017, 9, 659. [Google Scholar] [CrossRef] [Green Version]
112. Griffiths, P.; Nendel, C.; Pickert, J.; Hostert, P. Towards national-scale characterization of grassland use intensity from integrated Sentinel-2 and Landsat time series. Remote Sens. Environ. 2020, 238, 111124. [Google Scholar] [CrossRef]
  113. Zhou, Q.; Rover, J.; Brown, J.; Worstell, B.; Howard, D.; Wu, Z.; Gallant, A.L.; Rundquist, B.; Burke, M. Monitoring landscape dynamics in central U.S. grasslands with harmonized Landsat-8 and Sentinel-2 time series data. Remote Sens. 2019, 11, 328. [Google Scholar] [CrossRef] [Green Version]
  114. Torbick, N.; Huang, X.; Ziniti, B.; Johnson, D.; Masek, J.; Reba, M. Fusion of Moderate Resolution Earth Observations for Operational Crop Type Mapping. Remote Sens. 2018, 10, 1058. [Google Scholar] [CrossRef] [Green Version]
115. Hao, P.; Tang, H.; Chen, Z.; Yu, L.; Wu, M.Q. High resolution crop intensity mapping using harmonized Landsat-8 and Sentinel-2 data. J. Integr. Agric. 2019, 18, 2883–2897. [Google Scholar] [CrossRef]
  116. Frantz, D. FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sens. 2019, 11, 1124. [Google Scholar] [CrossRef] [Green Version]
117. Saunier, S.; Louis, J.; Debaecker, V.; Beaton, T.; Cadau, E.G.; Boccia, V.; Gascon, F. Sen2Like, a Tool to Generate Sentinel-2 Harmonised Surface Reflectance Products—First Results with Landsat-8. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 5650–5653. [Google Scholar] [CrossRef]
  118. Bontemps, S.; Arias, M.; Cara, C.; Dedieu, G.; Guzzonato, E.; Hagolle, O.; Inglada, J.; Matton, N.; Morin, D.; Popescu, R.; et al. Building a data set over 12 globally distributed sites to support the development of agriculture monitoring applications with Sentinel-2. Remote Sens. 2015, 7, 16062–16090. [Google Scholar] [CrossRef] [Green Version]
  119. Whitcraft, A.K.; Becker-Reshef, I.; Justice, C.O. A framework for defining spatially explicit earth observation requirements for a global agricultural monitoring initiative (GEOGLAM). Remote Sens. 2015, 7, 1461–1481. [Google Scholar] [CrossRef] [Green Version]
  120. Tian, H.; Huang, N.; Niu, Z.; Qin, Y.; Pei, J.; Wang, J. Mapping Winter Crops in China with Multi-Source Satellite Imagery and Phenology-Based Algorithm. Remote Sens. 2019, 11, 820. [Google Scholar] [CrossRef] [Green Version]
  121. Feng, Q.; Yang, J.; Zhu, D.; Liu, J.; Guo, H.; Bayartungalag, B.; Li, B. Integrating Multitemporal Sentinel-1/2 Data for Coastal Land Cover Classification Using a Multibranch Convolutional Neural Network: A Case of the Yellow River Delta. Remote Sens. 2019, 11, 1006. [Google Scholar] [CrossRef] [Green Version]
  122. Frantz, D.; Haß, E.; Uhl, A.; Stoffels, J.; Hill, J. Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects. Remote Sens. Environ. 2018, 215, 471–481. [Google Scholar] [CrossRef]
  123. Zhang, H.K.; Roy, D.P.; Yan, L.; Li, Z.; Huang, H.; Vermote, E.; Skakun, S.; Roger, J.C. Characterization of Sentinel-2A and Landsat-8 top of atmosphere, surface, and nadir BRDF adjusted reflectance and NDVI differences. Remote Sens. Environ. 2018, 215, 482–494. [Google Scholar] [CrossRef]
  124. Heydari, S.S.; Mountrakis, G. Effect of classifier selection, reference sample size, reference class distribution and scene heterogeneity in per-pixel classification accuracy using 26 Landsat sites. Remote Sens. Environ. 2018, 204, 648–658. [Google Scholar] [CrossRef]
  125. Viana, C.M.; Girão, I.; Rocha, J. Long-Term Satellite Image Time-Series for Land Use/Land Cover Change Detection Using Refined Open Source Data in a Rural Region. Remote Sens. 2019, 11, 1104. [Google Scholar] [CrossRef] [Green Version]
  126. Noi, P.T.; Kappas, M. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery. Sensors 2018, 18, 18. [Google Scholar] [CrossRef] [Green Version]
  127. Griffiths, P.; Jakimow, B.; Hostert, P. Reconstructing long term annual deforestation dynamics in Pará and Mato Grosso using the Landsat archive. Remote Sens. Environ. 2018, 216, 497–513. [Google Scholar] [CrossRef]
  128. Waldner, F.; De Abelleyra, D.; Verón, S.R.; Zhang, M.; Wu, B.; Plotnikov, D.; Bartalev, S.; Lavreniuk, M.; Skakun, S.; Kussul, N.; et al. Towards a set of agrosystem-specific cropland mapping methods to address the global cropland diversity. Int. J. Remote Sens. 2016, 37, 3196–3231. [Google Scholar] [CrossRef] [Green Version]
  129. Leinenkugel, P.; Deck, R.; Huth, J.; Ottinger, M.; Mack, B. The Potential of Open Geodata for Automated Large-Scale Land Use and Land Cover Classification. Remote Sens. 2019, 11, 2249. [Google Scholar] [CrossRef] [Green Version]
  130. Sanches, I.D.; Feitosa, R.Q.; Achanccaray, P.; Montibeller, B.; Luiz, A.J.; Soares, M.D.; Prudente, V.H.; Vieira, D.C.; Maurano, L.E. LEM benchmark database for tropical agricultural remote sensing application. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2018, 42, 387–392. [Google Scholar] [CrossRef] [Green Version]
  131. Sun, C.; Bian, Y.; Zhou, T.; Pan, J. Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region. Sensors 2019, 19, 2401. [Google Scholar] [CrossRef] [Green Version]
132. Fowler, J.; Waldner, F.; Hochman, Z. All pixels are useful, but some are more useful: Efficient in situ data collection for crop-type mapping using sequential exploration methods. Int. J. Appl. Earth Obs. Geoinf. 2020, 91, 102114. [Google Scholar] [CrossRef]
  133. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
134. Mack, B.; Leinenkugel, P.; Kuenzer, C.; Dech, S. A semi-automated approach for the generation of a new land use and land cover product for Germany based on Landsat time-series and LUCAS in-situ data. Remote Sens. Lett. 2017, 8, 244–253. [Google Scholar] [CrossRef]
  135. Zhang, H.K.; Roy, D.P. Using the 500 m MODIS land cover product to derive a consistent continental scale 30 m Landsat land cover classification. Remote Sens. Environ. 2017, 197, 15–34. [Google Scholar] [CrossRef]
  136. Giuliani, G.; Camara, G.; Killough, B.; Minchin, S. Earth Observation Open Science: Enhancing Reproducible Science Using Data Cubes. Data 2019, 4, 147. [Google Scholar] [CrossRef] [Green Version]
  137. Lewis, A.; Oliver, S.; Lymburner, L.; Evans, B.; Wyborn, L.; Mueller, N.; Raevksi, G.; Hooke, J.; Woodcock, R.; Sixsmith, J.; et al. The Australian Geoscience Data Cube—Foundations and lessons learned. Remote Sens. Environ. 2017, 202, 276–292. [Google Scholar] [CrossRef]
  138. Mercier, A.; Betbeder, J.; Rumiano, F.; Baudry, J.; Gond, V.; Blanc, L.; Bourgoin, C.; Cornu, G.; Ciudad, C.; Marchamalo, M.; et al. Evaluation of Sentinel-1 and 2 Time Series for Land Cover Classification of Forest—Agriculture Mosaics in Temperate and Tropical Landscapes. Remote Sens. 2019, 11, 979. [Google Scholar] [CrossRef] [Green Version]
  139. Inglada, J.; Arias, M.; Tardy, B.; Hagolle, O.; Valero, S.; Morin, D.; Dedieu, G.; Sepulcre, G.; Bontemps, S.; Defourny, P.; et al. Assessment of an operational system for crop type map production using high temporal and spatial resolution satellite optical imagery. Remote Sens. 2015, 7, 12356–12379. [Google Scholar] [CrossRef] [Green Version]
  140. Scheffler, D.; Frantz, D.; Segl, K. Spectral harmonization and red edge prediction of Landsat-8 to Sentinel-2 using land cover optimized multivariate regressors. Remote Sens. Environ. 2020, 241, 111723. [Google Scholar] [CrossRef]
  141. Ienco, D.; Interdonato, R.; Gaetano, R.; Minh, D.H.T. Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture. ISPRS J. Photogramm. Remote Sens. 2019, 158, 11–22. [Google Scholar] [CrossRef]
  142. Roy, D.P.; Kovalskyy, V.; Zhang, H.K.; Vermote, E.F.; Yan, L.; Kumar, S.S.; Egorov, A. Characterization of Landsat-7 to Landsat-8 reflective wavelength and normalized difference vegetation index continuity. Remote Sens. Environ. 2016, 185, 57–70. [Google Scholar] [CrossRef] [Green Version]
  143. Nativi, S.; Mazzetti, P.; Craglia, M. A view-based model of data-cube to support big earth data systems interoperability. Big Earth Data 2017, 1, 75–99. [Google Scholar] [CrossRef] [Green Version]
  144. Forkuor, G.; Dimobe, K.; Serme, I.; Tondoh, J.E. Landsat-8 vs. Sentinel-2: Examining the added value of sentinel-2’s red-edge bands to land-use and land-cover mapping in Burkina Faso. GIScience Remote Sens. 2018, 55, 331–354. [Google Scholar] [CrossRef]
  145. Vuolo, F.; Neuwirth, M.; Immitzer, M.; Atzberger, C.; Ng, W.T. How much does multi-temporal Sentinel-2 data improve crop type classification? Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 122–130. [Google Scholar] [CrossRef]
146. Immitzer, M.; Vuolo, F.; Atzberger, C. First experience with Sentinel-2 data for crop and tree species classifications in central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  147. Ramoelo, A.; Cho, M.; Mathieu, R.; Skidmore, A.K. Potential of Sentinel-2 spectral configuration to assess rangeland quality. J. Appl. Remote Sens. 2015, 9, 094096. [Google Scholar] [CrossRef] [Green Version]
  148. Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
  149. Persson, M.; Lindberg, E.; Reese, H. Tree Species Classification with Multi-Temporal Sentinel-2 Data. Remote Sens. 2018, 10, 1794. [Google Scholar] [CrossRef] [Green Version]
  150. Sothe, C.; de Almeida, C.M.; Liesenberg, V.; Schimalski, M.B. Evaluating Sentinel-2 and Landsat-8 data to map sucessional forest stages in a subtropical forest in Southern Brazil. Remote Sens. 2017, 9, 838. [Google Scholar] [CrossRef] [Green Version]
  151. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GIScience Remote Sens. 2019, 57, 1–20. [Google Scholar] [CrossRef] [Green Version]
  152. Kobayashi, N.; Tani, H.; Wang, X.; Sonobe, R. Crop classification using spectral indices derived from Sentinel-2A imagery. J. Inf. Telecommun. 2019, 4, 67–90. [Google Scholar] [CrossRef]
  153. Jakimow, B.; Griffiths, P.; van der Linden, S.; Hostert, P. Mapping pasture management in the Brazilian Amazon from dense Landsat time series. Remote Sens. Environ. 2018, 205, 453–468. [Google Scholar] [CrossRef]
  154. Wilson, E.H.; Sader, S.A. Detection of forest harvest type using multiple dates of Landsat TM imagery. Remote Sens. Environ. 2002, 80, 385–396. [Google Scholar] [CrossRef]
  155. Garcia, M.J.L.; Caselles, V. Mapping burns and natural reforestation using thematic mapper data. Geocarto Int. 1991, 6, 31–37. [Google Scholar] [CrossRef]
  156. Müller, H.; Rufin, P.; Griffiths, P.; Barros Siqueira, A.J.; Hostert, P. Mining dense Landsat time series for separating cropland and pasture in a heterogeneous Brazilian savanna landscape. Remote Sens. Environ. 2015, 156, 490–499. [Google Scholar] [CrossRef] [Green Version]
  157. Radoux, J.; Chomé, G.; Jacques, D.C.; Waldner, F.; Bellemans, N.; Matton, N.; Lamarche, C.; D’Andrimont, R.; Defourny, P. Sentinel-2’s potential for sub-pixel landscape feature detection. Remote Sens. 2016, 8, 488. [Google Scholar] [CrossRef] [Green Version]
  158. Lambert, M.J.; Sibiry, P.C.; Blaes, X.; Baret, P.; Defourny, P. Estimating smallholder crops production at village level from Sentinel-2 time series in Mali’s cotton belt. Remote Sens. Environ. 2018, 216, 647–657. [Google Scholar] [CrossRef]
  159. Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A high-performance and in-season classification system of field-level crop types using time-series Landsat data and a machine learning approach. Remote Sens. Environ. 2018, 210, 35–47. [Google Scholar] [CrossRef]
  160. Xiao, X.; Boles, S.; Liu, J.; Zhuang, D.; Liu, M. Characterization of forest types in Northeastern China, using multi-temporal SPOT-4 VEGETATION sensor data. Remote Sens. Environ. 2002, 82, 335–348. [Google Scholar] [CrossRef]
  161. Meng, S.; Zhong, Y.; Luo, C.; Hu, X.; Wang, X.; Huang, S. Optimal Temporal Window Selection for Winter Wheat and Rapeseed Mapping with Sentinel-2 Images: A Case Study of Zhongxiang in China. Remote Sens. 2020, 12, 226. [Google Scholar] [CrossRef] [Green Version]
  162. Immitzer, M.; Neuwirth, M.; Böck, S.; Brenner, H.; Vuolo, F.; Atzberger, C. Optimal Input Features for Tree Species Classification in Central Europe Based on Multi-Temporal Sentinel-2 Data. Remote Sens. 2019, 11, 2599. [Google Scholar] [CrossRef] [Green Version]
  163. Piyoosh, A.K.; Ghosh, S.K. Development of a modified bare soil and urban index for Landsat 8 satellite data. Geocarto Int. 2018, 33, 423–442. [Google Scholar] [CrossRef]
  164. Rasul, A.; Balzter, H.; Ibrahim, G.; Hameed, H.; Wheeler, J.; Adamu, B.; Ibrahim, S.; Najmaddin, P. Applying Built-Up and Bare-Soil Indices from Landsat 8 to Cities in Dry Climates. Land 2018, 7, 81. [Google Scholar] [CrossRef] [Green Version]
  165. Filella, I.; Peñuelas, J. The red edge position and shape as indicators of plant chlorophyll content, biomass and hydric status. Int. J. Remote Sens. 1994, 15, 1459–1470. [Google Scholar] [CrossRef]
  166. Clerici, N.; Valbuena Calderón, C.A.; Posada, J.M. Fusion of Sentinel-1A and Sentinel-2A data for land cover mapping: A case study in the lower Magdalena region, Colombia. J. Maps 2017, 13, 718–726. [Google Scholar] [CrossRef] [Green Version]
  167. Cao, Z.; Yao, X.; Liu, H.; Liu, B.; Cheng, T.; Tian, Y.; Cao, W.; Zhu, Y. Comparison of the abilities of vegetation indices and photosynthetic parameters to detect heat stress in wheat. Agric. For. Meteorol. 2019, 265, 121–136. [Google Scholar] [CrossRef]
  168. Dong, T.; Liu, J.; Shang, J.; Qian, B.; Ma, B.; Kovacs, J.M.; Walters, D.; Jiao, X.; Geng, X.; Shi, Y. Assessment of red-edge vegetation indices for crop leaf area index estimation. Remote Sens. Environ. 2019, 222, 133–143. [Google Scholar] [CrossRef]
  169. Lin, S.; Li, J.; Liu, Q.; Li, L.; Zhao, J.; Yu, W. Evaluating the Effectiveness of Using Vegetation Indices Based on Red-Edge Reflectance from Sentinel-2 to Estimate Gross Primary Productivity. Remote Sens. 2019, 11, 1303. [Google Scholar] [CrossRef] [Green Version]
  170. Gitelson, A.; Merzlyak, M.N. Spectral Reflectance Changes Associated with Autumn Senescence of Aesculus hippocastanum L. and Acer platanoides L. Leaves. Spectral Features and Relation to Chlorophyll Estimation. J. Plant Physiol. 1994, 143, 286–292. [Google Scholar] [CrossRef]
  171. Shoko, C.; Mutanga, O. Examining the strength of the newly-launched Sentinel 2 MSI sensor in detecting and discriminating subtle differences between C3 and C4 grass species. ISPRS J. Photogramm. Remote Sens. 2017, 129, 32–40. [Google Scholar] [CrossRef]
  172. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef]
  173. Herrmann, I.; Pimstein, A.; Karnieli, A.; Cohen, Y.; Alchanatis, V.; Bonfil, D. LAI assessment of wheat and potato crops by VENμS and Sentinel-2 bands. Remote Sens. Environ. 2011, 115, 2141–2151. [Google Scholar] [CrossRef]
  174. Osgouei, P.E.; Kaya, S.; Sertel, E.; Alganci, U. Separating built-up areas from bare land in mediterranean cities using Sentinel-2A imagery. Remote Sens. 2019, 11, 345. [Google Scholar] [CrossRef] [Green Version]
  175. Vincent, A.; Kumar, A.; Upadhyay, P. Effect of Red-Edge Region in Fuzzy Classification: A Case Study of Sunflower Crop. J. Indian Soc. Remote Sens. 2020, 48, 645–657. [Google Scholar] [CrossRef]
  176. Hardisky, M.; Klemas, V.; Smart, M. The influence of soil salinity, growth form, and leaf moisture on the spectral radiance of Spartina Alterniflora Canopies. Photogramm. Eng. Remote Sens. 1983, 49, 77–83. [Google Scholar]
  177. Son, N.T.; Chen, C.F.; Chen, C.R.; Guo, H.Y. Classification of multitemporal Sentinel-2 data for field-level monitoring of rice cropping practices in Taiwan. Adv. Space Res. 2020, 65, 1910–1921. [Google Scholar] [CrossRef]
  178. Carrasco, L.; O’Neil, A.; Morton, R.; Rowland, C. Evaluating Combinations of Temporally Aggregated Sentinel-1, Sentinel-2 and Landsat 8 for Land Cover Mapping with Google Earth Engine. Remote Sens. 2019, 11, 288. [Google Scholar] [CrossRef] [Green Version]
  179. Gerard, F.; Plummer, S.; Wadsworth, R.; Sanfeliu, A.F.; Iliffe, L.; Balzter, H.; Wyatt, B. Forest Fire Scar Detection in the Boreal Forest with Multitemporal SPOT-VEGETATION Data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2575–2585. [Google Scholar] [CrossRef]
  180. Van Deventer, A.; Ward, A.; Gowda, P.; Lyon, J. Using thematic mapper data to identify contrasting soil plains and tillage practices. Photogramm. Eng. Remote Sens. 1997, 63, 87–93. [Google Scholar]
  181. Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30. [Google Scholar] [CrossRef] [Green Version]
  182. Cai, Y.; Lin, H.; Zhang, M. Mapping paddy rice by the object-based random forest method using time series Sentinel-1/Sentinel-2 data. Adv. Space Res. 2019, 64, 2233–2244. [Google Scholar] [CrossRef]
  183. Chatziantoniou, A.; Psomiadis, E.; Petropoulos, G. Co-Orbital Sentinel 1 and 2 for LULC Mapping with Emphasis on Wetlands in a Mediterranean Setting Based on Machine Learning. Remote Sens. 2017, 9, 1259. [Google Scholar] [CrossRef] [Green Version]
  184. Azzari, G.; Lobell, D. Landsat-based classification in the cloud: An opportunity for a paradigm shift in land cover monitoring. Remote Sens. Environ. 2017, 202, 64–74. [Google Scholar] [CrossRef]
  185. Zhong, L.; Hawkins, T.; Biging, G.; Gong, P. A phenology-based approach to map crop types in the San Joaquin Valley, California. Int. J. Remote Sens. 2011, 32, 7777–7804. [Google Scholar] [CrossRef]
  186. Yan, L.; Roy, D.P. Improved time series land cover classification by missing-observation-adaptive nonlinear dimensionality reduction. Remote Sens. Environ. 2015, 158, 478–491. [Google Scholar] [CrossRef] [Green Version]
  187. Misra, G.; Cawkwell, F.; Wingler, A. Status of Phenological Research Using Sentinel-2 Data: A Review. Remote Sens. 2020, 12, 2760. [Google Scholar] [CrossRef]
  188. Nguyen, L.H.; Joshi, D.R.; Clay, D.E.; Henebry, G.M. Characterizing land cover/land use from multiple years of Landsat and MODIS time series: A novel approach using land surface phenology modeling and random forest classifier. Remote Sens. Environ. 2020, 238, 111017. [Google Scholar] [CrossRef]
  189. Pastick, N.J.; Dahal, D.; Wylie, B.K.; Parajuli, S.; Boyte, S.P.; Wu, Z. Characterizing Land Surface Phenology and Exotic Annual Grasses in Dryland Ecosystems Using Landsat and Sentinel-2 Data in Harmony. Remote Sens. 2020, 12, 725. [Google Scholar] [CrossRef] [Green Version]
  190. Schmidt, M.; Pringle, M.; Devadas, R.; Denham, R.; Tindall, D. A Framework for Large-Area Mapping of Past and Present Cropping Activity Using Seasonal Landsat Images and Time Series Metrics. Remote Sens. 2016, 8, 312. [Google Scholar] [CrossRef] [Green Version]
  191. Peña, M.A.; Brenning, A. Assessing fruit-tree crop classification from Landsat-8 time series for the Maipo Valley, Chile. Remote Sens. Environ. 2015, 171, 234–244. [Google Scholar] [CrossRef]
  192. Htitiou, A.; Boudhar, A.; Lebrini, Y.; Hadria, R.; Lionboui, H.; Elmansouri, L.; Tychon, B.; Benabdelouahab, T. The Performance of Random Forest Classification Based on Phenological Metrics Derived from Sentinel-2 and Landsat 8 to Map Crop Cover in an Irrigated Semi-arid Region. Remote Sens. Earth Syst. Sci. 2019, 2, 208–224. [Google Scholar] [CrossRef]
  193. Zhang, X.; Xiong, Q.; Di, L.; Tang, J.; Yang, J.; Wu, H.; Qin, Y.; Su, R.; Zhou, W. Phenological metrics-based crop classification using HJ-1 CCD images and Landsat 8 imagery. Int. J. Digit. Earth 2017, 11, 1219–1240. [Google Scholar] [CrossRef]
  194. Parente, L.; Mesquita, V.; Miziara, F.; Baumann, L.; Ferreira, L. Assessing the pasturelands and livestock dynamics in Brazil, from 1985 to 2017: A novel approach based on high spatial resolution imagery and Google Earth Engine cloud computing. Remote Sens. Environ. 2019, 232, 111301. [Google Scholar] [CrossRef]
  195. Santos, C.L.M.d.O.; Lamparelli, R.A.C.; Figueiredo, G.K.D.A.; Dupuy, S.; Boury, J.; Luciano, A.C.d.S.; Torres, R.d.S.; le Maire, G. Classification of crops, pastures, and tree plantations along the season with multi-sensor image time series in a subtropical agricultural region. Remote Sens. 2019, 11, 334. [Google Scholar] [CrossRef] [Green Version]
  196. Zeferino, L.B.; de Souza, L.F.T.; do Amaral, C.H.; Filho, E.I.F.; de Oliveira, T.S. Does environmental data increase the accuracy of land use and land cover classification? Int. J. Appl. Earth Obs. Geoinf. 2020, 91, 102128. [Google Scholar] [CrossRef]
  197. Zhao, Y.; Feng, D.; Yu, L.; Wang, X.; Chen, Y.; Bai, Y.; Hernández, H.J.; Galleguillos, M.; Estades, C.; Biging, G.S.; et al. Detailed dynamic land cover mapping of Chile: Accuracy improvement by integrating multi-temporal data. Remote Sens. Environ. 2016, 183, 170–185. [Google Scholar] [CrossRef]
  198. Zhu, Z.; Gallant, A.L.; Woodcock, C.E.; Pengra, B.; Olofsson, P.; Loveland, T.R.; Jin, S.; Dahal, D.; Yang, L.; Auch, R.F. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative. ISPRS J. Photogramm. Remote Sens. 2016, 122, 206–221. [Google Scholar] [CrossRef] [Green Version]
  199. Gilbertson, J.K.; van Niekerk, A. Value of dimensionality reduction for crop differentiation with multi-temporal imagery and machine learning. Comput. Electron. Agric. 2017, 142, 50–58. [Google Scholar] [CrossRef]
  200. Kopp, S.; Becker, P.; Doshi, A.; Wright, D.J.; Zhang, K.; Xu, H. Achieving the Full Vision of Earth Observation Data Cubes. Data 2019, 4, 94. [Google Scholar] [CrossRef] [Green Version]
  201. Appel, M.; Pebesma, E. On-Demand Processing of Data Cubes from Satellite Image Collections with the Gdalcubes Library. Data 2019, 4, 92. [Google Scholar] [CrossRef] [Green Version]
  202. Giuliani, G.; Masó, J.; Mazzetti, P.; Nativi, S.; Zabala, A. Paving the Way to Increased Interoperability of Earth Observations Data Cubes. Data 2019, 4, 113. [Google Scholar] [CrossRef] [Green Version]
  203. Dwyer, J.L.; Roy, D.P.; Sauer, B.; Jenkerson, C.B.; Zhang, H.K.; Lymburner, L. Analysis Ready Data: Enabling Analysis of the Landsat Archive. Remote Sens. 2018, 10, 1363. [Google Scholar] [CrossRef]
  204. Camara, G.; Assis, L.F.; Ribeiro, G.; Ferreira, K.R.; Llapa, E.; Vinhas, L. Big Earth Observation Data Analytics: Matching Requirements to System Architectures. In Proceedings of the BigSpatial ’16, 5th ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, San Francisco, CA, USA, 31 October 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1–6. [Google Scholar] [CrossRef]
  205. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202. [Google Scholar] [CrossRef]
  206. Dhu, T.; Dunn, B.; Lewis, B.; Lymburner, L.; Mueller, N.; Telfer, E.; Lewis, A.; McIntyre, A.; Minchin, S.; Phillips, C. Digital Earth Australia—Unlocking new value from earth observation data. Big Earth Data 2017, 1, 64–74. [Google Scholar] [CrossRef] [Green Version]
  207. Augustin, H.; Sudmanns, M.; Tiede, D.; Lang, S.; Baraldi, A. Semantic Earth Observation Data Cubes. Data 2019, 4, 102. [Google Scholar] [CrossRef] [Green Version]
  208. Mahecha, M.D.; Gans, F.; Brandt, G.; Christiansen, R.; Cornell, S.E.; Fomferra, N.; Kraemer, G.; Peters, J.; Bodesheim, P.; Camps-Valls, G.; et al. Earth system data cubes unravel global multivariate dynamics. Earth Syst. Dyn. 2020, 11, 201–234. [Google Scholar] [CrossRef] [Green Version]
  209. Potapov, P.; Hansen, M.C.; Kommareddy, I.; Kommareddy, A.; Turubanova, S.; Pickens, A.; Adusei, B.; Tyukavina, A.; Ying, Q. Landsat Analysis Ready Data for Global Land Cover and Land Cover Change Mapping. Remote Sens. 2020, 12, 426. [Google Scholar] [CrossRef] [Green Version]
  210. Qiu, S.; Lin, Y.; Shang, R.; Zhang, J.; Ma, L.; Zhu, Z. Making Landsat Time Series Consistent: Evaluating and Improving Landsat Analysis Ready Data. Remote Sens. 2019, 11, 51. [Google Scholar] [CrossRef] [Green Version]
  211. Poortinga, A.; Tenneson, K.; Shapiro, A.; Nguyen, Q.; Aung, K.S.; Chishtie, F.; Saah, D. Mapping plantations in Myanmar by fusing Landsat-8, Sentinel-2 and Sentinel-1 data along with systematic error quantification. Remote Sens. 2019, 11, 831. [Google Scholar] [CrossRef] [Green Version]
  212. Kaplan, G.; Avdan, U. Evaluating the utilization of the red-edge and radar bands from sentinel sensors for wetland classification. CATENA 2019, 178, 109–119. [Google Scholar] [CrossRef]
  213. Chen, N. Mapping mangrove in Dongzhaigang, China using Sentinel-2 imagery. J. Appl. Remote Sens. 2020, 14, 1. [Google Scholar] [CrossRef]
  214. Diniz, C.; Cortinhas, L.; Nerino, G.; Rodrigues, J.; Sadeck, L.; Adami, M.; Souza-Filho, P. Brazilian Mangrove Status: Three Decades of Satellite Data Analysis. Remote Sens. 2019, 11, 808. [Google Scholar] [CrossRef] [Green Version]
  215. Manna, S.; Raychaudhuri, B. Mapping distribution of Sundarban mangroves using Sentinel-2 data and new spectral metric for detecting their health condition. Geocarto Int. 2018, 35, 434–452. [Google Scholar] [CrossRef]
  216. Crabbe, R.A.; Lamb, D.; Edwards, C. Discrimination of species composition types of a grazed pasture landscape using Sentinel-1 and Sentinel-2 data. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 101978. [Google Scholar] [CrossRef]
  217. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.I. Assessing the suitability of data from Sentinel-1A and 2A for crop classification. GIScience Remote Sens. 2017, 54, 918–938. [Google Scholar] [CrossRef]
  218. Mansaray, L.R.; Wang, F.; Huang, J.; Yang, L.; Kanu, A.S. Accuracies of support vector machine and random forest in rice mapping with Sentinel-1A, Landsat-8 and Sentinel-2A datasets. Geocarto Int. 2020, 35, 1088–1108. [Google Scholar] [CrossRef]
  219. Mansaray, L.R.; Yang, L.; Kabba, V.T.S.; Kanu, A.S.; Huang, J.; Wang, F. Optimising rice mapping in cloud-prone environments by combining quad-source optical with Sentinel-1A microwave satellite imagery. GIScience Remote Sens. 2019, 56, 1333–1354. [Google Scholar] [CrossRef]
  220. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Mapping paddy rice fields by applying machine learning algorithms to multi-temporal Sentinel-1A and Landsat data. Int. J. Remote Sens. 2017, 39, 1042–1067. [Google Scholar] [CrossRef] [Green Version]
  221. Waldner, F.; Duveiller, G.; Defourny, P. Local adjustments of image spatial resolution to optimize large-area mapping in the era of big data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 374–385. [Google Scholar] [CrossRef]
  222. Capolupo, A.; Monterisi, C.; Tarantino, E. Landsat Images Classification Algorithm (LICA) to Automatically Extract Land Cover Information in Google Earth Engine Environment. Remote Sens. 2020, 12, 1201. [Google Scholar] [CrossRef] [Green Version]
  223. García-Berná, J.A.; Ouhbi, S.; Benmouna, B.; García-Mateos, G.; Fernández-Alemán, J.L.; Molina-Martínez, J.M. Systematic Mapping Study on Remote Sensing in Agriculture. Appl. Sci. 2020, 10, 3456. [Google Scholar] [CrossRef]
Figure 1. Schematic process to select the papers from 2015 to 2020 presented in this systematic review.
Figure 2. Global distribution of the research groups responsible for the papers selected, highlighting the 15 countries with the greatest number of publications.
Figure 3. Top 15 journals where the selected papers were published.
Figure 4. Scientific network mapping of the most frequently used keyword terms in the selected papers. Colors indicate the cluster to which each keyword was most related. Lines represent co-occurrence link strength among terms.
Figure 5. Scientific network mapping of the papers’ keywords. Colors indicate the year in which each keyword was used most. Lines represent co-occurrence link strength among terms.
Figure 6. Global distribution of the study areas considered for applications in the selected papers, highlighting the 15 countries with the greatest number of publications.
Figure 7. Scientific network mapping of the most frequently used keywords in papers focusing on future possibilities in LULC and LULCC analysis. Colors indicate the cluster to which each keyword was most related. Lines represent co-occurrence link strength among terms.
Figure 8. Scientific network mapping summarizing applications and approaches adopted with L8/OLI and S2/MSI data. Colors indicate the cluster to which each term was most related. Lines represent co-occurrence link strength among terms.
Table 1. Characteristics of L8/OLI and S2/MSI sensors.

| Spectral Band | L8/OLI Central Wavelength | S2/MSI Central Wavelengths (2A, 2B) |
|---|---|---|
| Coastal/Aerosol | 442.9 nm (30 m) | 442.7 nm (60 m), 442.2 nm (60 m) |
| Blue | 482.0 nm (30 m) | 492.4 nm (10 m), 492.1 nm (10 m) |
| Green | 561.4 nm (30 m) | 559.8 nm (10 m), 559.0 nm (10 m) |
| Red | 654.6 nm (30 m) | 664.6 nm (10 m), 664.9 nm (10 m) |
| Red-edge 1 | – | 704.1 nm (20 m), 703.8 nm (20 m) |
| Red-edge 2 | – | 740.5 nm (20 m), 739.1 nm (20 m) |
| Red-edge 3 | – | 782.8 nm (20 m), 779.7 nm (20 m) |
| NIR | 864.7 nm (30 m) | 832.8 nm (10 m), 832.9 nm (10 m) |
| NIR narrow | – | 864.7 nm (20 m), 864.0 nm (20 m) |
| SWIR 1 | 1608.9 nm (30 m) | 1613.7 nm (20 m), 1610.4 nm (20 m) |
| SWIR 2 | 2200.7 nm (30 m) | 2202.4 nm (20 m), 2185.7 nm (20 m) |
| Panchromatic | 589.5 nm (15 m) | – |
| Cirrus | 1373.4 nm (30 m) | 1373.5 nm (60 m), 1376.9 nm (60 m) |
| Water vapor | – | 945.1 nm (60 m), 943.2 nm (60 m) |

Source: ESA [31] and NASA [32].
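The band correspondence in Table 1 is what sensor-harmonization work (e.g., refs. [140,142]) must account for: nominally equivalent bands have slightly different central wavelengths. A minimal sketch of that comparison, using the values from Table 1 (the dictionary and function names are ours, not from any cited paper):

```python
# Central wavelengths (nm) from Table 1: Landsat-8/OLI vs. Sentinel-2A/MSI.
# Note that L8/OLI NIR (864.7 nm) matches S2's narrow NIR band (B8A),
# not the wide B8 band (832.8 nm) listed here.
L8_OLI = {"Blue": 482.0, "Green": 561.4, "Red": 654.6,
          "NIR": 864.7, "SWIR1": 1608.9, "SWIR2": 2200.7}
S2A_MSI = {"Blue": 492.4, "Green": 559.8, "Red": 664.6,
           "NIR": 832.8, "SWIR1": 1613.7, "SWIR2": 2202.4}

def band_offsets(ref, other):
    """Central-wavelength difference (nm) for bands shared by two sensors."""
    return {band: round(other[band] - ref[band], 1)
            for band in ref if band in other}

offsets = band_offsets(L8_OLI, S2A_MSI)
print(offsets)  # e.g., the Red band centers differ by 10.0 nm
```

The large NIR offset (about −31.9 nm) is why harmonized products such as HLS pair L8/OLI NIR with S2/MSI B8A rather than B8.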
Table 2. Schematic process to filter, select, and organize the papers.

| Fields | Definition | Categories |
|---|---|---|
| Year | Year of publication | Year |
| Source title | Journal | Journal name |
| Institution | Name | Institution name |
| Area | Mapped area | Country, area of interest |
| Sensor type | Sensor used | S2/MSI, L8/OLI, both |
| Sampling strategy | Sampling adopted | Stratified random sampling, simple random sampling, others |
| Site type | Study area | Agricultural lands, natural vegetation |
| Classification method | Method adopted | RF, SVM, others |
| Accuracy measure | Accuracy index and values | OA, Kappa, F-score, and % of accuracy |

Source: Adapted from Ma et al. [26].
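The schema in Table 2 amounts to one record per reviewed paper. As a minimal sketch, it could be represented as a small record type; all field names and example values below are hypothetical illustrations, not data from the review:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewedPaper:
    """One row of the review database described in Table 2 (names are ours)."""
    year: int
    journal: str
    institution: str
    area: str          # country / area of interest
    sensor: str        # "S2/MSI", "L8/OLI", or "both"
    sampling: str      # e.g., "stratified random sampling"
    site_type: str     # "agricultural lands" or "natural vegetation"
    classifier: str    # e.g., "RF", "SVM"
    accuracy: dict = field(default_factory=dict)  # e.g., {"OA": 0.92}

# Purely illustrative entry:
paper = ReviewedPaper(2018, "Example Journal", "Example University",
                      "Example Country", "both", "stratified random sampling",
                      "agricultural lands", "RF", {"OA": 0.90})
print(paper.sensor)
```

Structuring each paper this way makes the category counts reported in the review (sensors, classifiers, site types) straightforward to tabulate.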
Table 3. Robustness of different and less frequent VIs to provide accurate mapping of different LULC classes.

| Class/Parameter | Spectral Vegetation Indices | Attested by |
|---|---|---|
| Natural vegetation | NDVIre, MNDWI, SRI, ARI, CIre, GEMI, NDBI, MIRBI, NDFI, NDMI | Feng et al. (2015), Pelletier et al. (2016), Puletti et al. (2017), Lin et al. (2019), Forkuor et al. (2018), Schultz et al. (2016), Carrasco et al. (2019) |
| Soybean | NDVIre, LSWI, GCVI, CIre, MSRre, SWIRIndex, PSRI, S2REP | Cai et al. (2018), Torbick et al. (2018), Müller et al. (2015), Liu et al. (2020), Csillik et al. (2019), Defourny et al. (2019), Niazmardi et al. (2018) |
| Maize | LSWI, RedSWIR, TVI, NDTI, NDVIre, GCVI, CRI700, IRECI, S2REP, CIre, CIred&re, MSR, MSRre, MSRredre, NDVIre2n, SIPI, REIP, PPR, NGRDI | Radoux et al. (2016), Sun et al. (2019, 2020), Cai et al. (2018), Xie et al. (2018), Frampton et al. (2013), Kobayashi et al. (2019), Vincent et al. (2020) |
| Cotton | NDVIre, LSWI, CIgreen, TVI, STI, NDTI | Torbick et al. (2018), Lambert et al. (2018) |
| Beetroot | AFRI1.6, SIWSI, NDII, PVR, mNDVI | Sonobe et al. (2018), Kobayashi et al. (2019) |
| Potato | REIP, CVI, Maccioni, Datt1 | Sonobe et al. (2018), Kobayashi et al. (2019) |
| Wheat | NDVIre, NDVIre2, MSR, MSRre, MSRredre, CIre, CIred&re, PSRI, S2REP, REIP, AVI, SIPI, PVR, mNDVI, GARI, VARIgreen | Csillik and Belgiu (2016), Xie et al. (2018), Defourny et al. (2019), Kobayashi et al. (2019), Vincent et al. (2020), Sonobe et al. (2018) |
| Barley | CIred&re, NDVIre, MSR, MSRredre, PSRI, S2REP | Xie et al. (2018), Defourny et al. (2019), Vincent et al. (2020) |
| Alfalfa | CIred&re, NDVIre, CIre, MSR, MSRre, MSRredre, LSWI | Csillik et al. (2019), Xie et al. (2018), Torbick et al. (2018), Vincent et al. (2020) |
| Millet | NDVIre, CIre, CIgreen, STI | Lambert et al. (2018), Forkuor et al. (2018) |
| Sorghum | NDVIre, MNDWI, CIgreen, TVI, CIre | Forkuor et al. (2018), Lambert et al. (2018) |
| Sunflower | NDVIre, PSRI, S2REP, CIR&RE, SAVIr&re | Niazmardi et al. (2018), Defourny et al. (2019), Vincent et al. (2020) |
| Rice | LSWI, NDVIre, RERVI, CIre, MNDWI | Torbick et al. (2018), Cao et al. (2019), Mansaray et al. (2019), Son et al. (2020) |
| Sugar beet | RedSWIR, NDVIre | Radoux et al. (2016), Csillik and Belgiu (2016) |
| Beans and groundnuts | NDVIre, MNDWI, PVR, REIP, GEMI, Datt3, mNDVI, VARIgreen | Forkuor et al. (2018), Kobayashi et al. (2019) |
| Tomato/Chili pepper | CIre, NDVIre1n, NDVIre2n, MSRre, MSRren | Sun et al. (2020) |
| Bare soil | MNDWI, NDBaI, NDBI, NDTI, WDVI, GEMI, DBI, MNDSI, DBSI, SRNIRnarrowRed, SRNIRnarrowGreen, RTVIcore, NRUI | Osgouei et al. (2019), Radoux et al. (2016), Piyoosh and Ghosh (2018), Rasul et al. (2018), Liu et al. (2020) |
| Grasslands, shrublands, rangelands and pastures | SWIRindex, IRECI, RedSWIR, NDSWIR, NBR, NBR-2, BAI, MNDWI, NDMI, REIP, MNSI, AFRI1.6 | Müller et al. (2015), Radoux et al. (2016), Forkuor et al. (2018), Jakimow et al. (2018), Carrasco et al. (2019), Sonobe et al. (2018) |
| Mangrove | CMRI, MMRI, DNVI | Chen (2020), Diniz et al. (2020), Manna and Raychaudhuri (2018) |
| Water bodies | MNDWI, NHI, VSDI | Radoux et al. (2016), Pelletier et al. (2016), Rasul et al. (2018), Feng et al. (2015), Forkuor et al. (2018), Guttler et al. (2017) |
| Settlements and built-up areas | MNDWI, NDBI, MNDBI, IBI, DBI, DBSI, NRUI | Forkuor et al. (2018), Pelletier et al. (2016), Mansaray et al. (2018), Piyoosh and Ghosh (2018), Rasul et al. (2018) |
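Several indices recurring in Table 3 are simple band-ratio formulations. As a hedged sketch, the standard definitions of three of them (NDVIre, CIre, and MNDWI) applied to Sentinel-2 surface-reflectance values; the formulas are the conventional ones from the literature cited in the table, while the variable names and example reflectances are ours:

```python
def ndvi_re(nir, red_edge):
    """Red-edge NDVI (NDVIre): (NIR - RE) / (NIR + RE)."""
    return (nir - red_edge) / (nir + red_edge)

def ci_re(nir, red_edge):
    """Red-edge chlorophyll index (CIre): NIR / RE - 1."""
    return nir / red_edge - 1.0

def mndwi(green, swir1):
    """Modified NDWI (MNDWI): (Green - SWIR1) / (Green + SWIR1)."""
    return (green - swir1) / (green + swir1)

# Hypothetical reflectances for a vigorously growing crop pixel
# (S2 bands B8 ~ NIR, B5 ~ red-edge, B3 ~ green, B11 ~ SWIR1):
nir, red_edge, green, swir1 = 0.45, 0.20, 0.10, 0.25
print(round(ndvi_re(nir, red_edge), 3))  # 0.385
print(round(ci_re(nir, red_edge), 3))    # 1.25
print(round(mndwi(green, swir1), 3))     # -0.429
```

Because the red-edge bands exist only on S2/MSI (Table 1), indices such as NDVIre and CIre cannot be computed from L8/OLI alone, which is one reason the review highlights S2's added value for phenology-sensitive classification.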

Share and Cite

MDPI and ACS Style

Chaves, M.E.D.; Picoli, M.C.A.; Sanches, I.D. Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review. Remote Sens. 2020, 12, 3062. https://0-doi-org.brum.beds.ac.uk/10.3390/rs12183062

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.