Article

Maize Crop Detection through Geo-Object-Oriented Analysis Using Orbital Multi-Sensors on the Google Earth Engine Platform

by Ismael Cavalcante Maciel Junior 1, Rivanildo Dallacort 2, Cácio Luiz Boechat 3, Paulo Eduardo Teodoro 4,*, Larissa Pereira Ribeiro Teodoro 4, Fernando Saragosa Rossi 4, José Francisco de Oliveira-Júnior 5, João Lucas Della-Silva 6, Fabio Henrique Rojo Baio 4, Mendelson Lima 7 and Carlos Antonio da Silva Junior 6,*

1 Post-Graduate Program in Biodiversity and Amazonian Agroecosystems (PPGBioAgro), State University of Mato Grosso (UNEMAT), Alta Floresta 78580-000, MT, Brazil
2 Department of Agronomy, State University of Mato Grosso (UNEMAT), Tangará da Serra 78301-532, MT, Brazil
3 Department of Agronomy, Federal University of Piauí (UFPI), Bom Jesus 64900-000, PI, Brazil
4 Department of Agronomy, Federal University of Mato Grosso do Sul (UFMS), Chapadão do Sul 79560-000, MS, Brazil
5 Department of Atmospheric Sciences, Federal University of Alagoas (UFAL), Maceió 57480-000, AL, Brazil
6 Department of Geography, State University of Mato Grosso (UNEMAT), Sinop 78555-000, MT, Brazil
7 Department of Biology, State University of Mato Grosso (UNEMAT), Alta Floresta 78580-000, MT, Brazil
* Authors to whom correspondence should be addressed.
Submission received: 21 December 2023 / Revised: 14 February 2024 / Accepted: 19 February 2024 / Published: 22 February 2024

Abstract

Mato Grosso state is the largest maize producer in Brazil, with cultivation concentrated predominantly in the second harvest. Driven by the need for more accurate and efficient data, agricultural intelligence is adapting and embracing new technologies such as satellite remote sensing and geographic information systems. In this respect, this study aimed to map the second-harvest maize cultivation areas of Canarana-MT in the 2019/2020 crop year by using geographic object-based image analysis (GEOBIA) with different spatial, spectral, and temporal resolutions. MSI/Sentinel-2, OLI/Landsat-8, MODIS-Terra and MODIS-Aqua, and PlanetScope imagery were used in this assessment. The maize crop mapping was based on the cartographic basis from IBGE (Brazilian Institute of Geography and Statistics) and on the Google Earth Engine (GEE), followed by image texture filtering via the gray-level co-occurrence matrix (GLCM), vegetation index calculation, segmentation by simple non-iterative clustering (SNIC), principal component (PC) analysis, classification by the random forest (RF) algorithm, and, finally, confusion matrix analysis, kappa, overall accuracy (OA), and validation statistics. These methods yielded satisfactory results, with OA from 86.41% to 88.65% and kappa from 81.26% to 84.61% among the imagery systems considered. The GEOBIA technique, combining SNIC segmentation and GLCM spectral and textural feature discrimination with the RF classifier, produced a maize crop map of the study area that demonstrates the improved performance of automated multispectral image classification processes.

1. Introduction

Mato Grosso state is the biggest maize producer in Brazil: according to Companhia Nacional de Abastecimento (Conab), in its 12th Brazilian grain crop survey (2021/22), the state presented an area of nearly 6.55 million hectares (ha) and 41.62 million tons (t) of grain [1]. It is also worth noting that, despite large domestic consumption, most of the production is destined for export, which has broken records in recent years [2]. At the state level, cultivation occurs predominantly in the second harvest, which corresponds to approximately 99% of the total maize crop area [3]. In view of this large production, the crop area estimate plays an important role in supplying national demand as well as in ensuring that transportation and storage capacity are not compromised [4].
Brazilian official crop estimates are based on subjective surveys by Conab and the Instituto Brasileiro de Geografia e Estatística (IBGE). Most agricultural crop data come from surveys conducted by technical agents, which rely on cultivated area, production, and economic data gathered through interviews with agricultural producers, agricultural input sellers, and other related sources, data of limited reliability for such a survey [5].
Field data surveys occur at low frequency and are being gradually reduced by the scarcity of financial and human resources. Moreover, the large territorial extension of the state of Mato Grosso makes these surveys costly and time-consuming. Remote sensing techniques and geographic information systems (GIS) can therefore be applied to avoid the drawbacks of collecting agricultural production data in the field [6]. Compared to other productive sectors, agricultural activities face greater uncertainties and thus demand frequent and large-scale monitoring [7,8].
Remote sensing and geoprocessing are the most widely used techniques for generating land-use data over time, as they allow the evaluation of changes in the landscape [9]. Remote sensing imagery provides important historical series for identifying crop dynamics, and its spatial resolution allows interventions at the precision agriculture level [10]. Geographic object-based image analysis (GEOBIA) has emerged as a powerful methodology for image analysis and classification, proving effective in accurately identifying and classifying land-cover types, mapping and monitoring deforestation, monitoring vegetation health and growth, predicting crop yields, and classifying remote sensing imagery [11]. By utilizing spectral, spatial, textural, and topological features, GEOBIA enables comprehensive image analysis, providing valuable information about the characteristics of, and changes in, natural and agricultural landscapes [12].
These methodologies integrate strongly with GIS and use advanced machine learning techniques for image classification. Random forest (RF) uses a set of decision trees, each trained on a subset of the data and features, to provide robust and accurate predictions; by combining the power of RF with the insights derived from GEOBIA, researchers and practitioners have achieved significant advances in the analysis of remotely sensed imagery, enabling comprehensive understanding and informed decision making [13,14].
The emergence of several cloud computing platforms, which store images captured by a range of satellite sensors alongside geospatial analysis and geoprocessing tools, has expanded access to free imagery and supports a wide range of remote sensing research. Google Earth Engine stands out in this scenario [15]. The platform hosts imagery from the Sentinel, Landsat, and Terra/Aqua satellites and provides the conditions for developing geospatial algorithms over large data sets [16].
In view of the economic importance of maize for Canarana and the state of Mato Grosso, information on the extent of cultivated areas, widely disclosed through the official area and production estimates issued by IBGE and Conab, supports political and economic planning. However, there are differences between these agencies' publications, as evidenced in the 2019/20 harvest: for that crop year, Conab estimated an area of 5414.4 thousand ha [3], while IBGE estimated 5337.3 thousand ha for second-harvest maize cultivation in Mato Grosso [17].
This difference in area is one of the factors behind a gap of about 930,000 tons in the state's estimated maize production, a significant amount for agribusiness that has raised uncertainty in the sector about the numbers released by official agencies. Therefore, this study presents a geoprocessing- and remote-sensing-based approach to mapping maize crops in Canarana, state of Mato Grosso.

2. Materials and Methods

The mapping of agricultural land with remote sensing tools, supported by geoprocessing and based on well-established, cost-effective, time-efficient, and accurate machine learning, was carried out according to the approaches defined for each of the following steps. The mapping of maize crops in Canarana, state of Mato Grosso, followed the workflow below (Figure 1).

2.1. Study Area

The study area comprises the municipality of Canarana, in northeastern Mato Grosso, between the geographic coordinates 12°36′17″ to 13°47′12″ S and 51°22′32″ to 53°06′12″ W (Figure 2). Canarana has an area of 10,855.181 km² and an estimated population of 21,842 inhabitants. The average altitude is 390 m, and the climate is humid tropical (Köppen–Geiger classification: Aw), with a mean temperature of 25 °C. It exhibits two well-defined seasons, dry (May to September) and rainy (October to April) [18,19], with average annual rainfall for the 2019/2020 crop year ranging from about 1650.54 mm to 1866.10 mm [20]. The predominant soil class is dystrophic red-yellow latosol [21].
Canarana originated in 1972 from a colonization project of the cooperative Cooperativa Colonizadora 31 de Março Ltda. (COOPERCOL), within the scope of the Superintendence for the Development of the Amazon (SUDAM). The purpose of the project was to attract rural entrepreneurs (large, medium, and small) to the region, as well as multinational and family producers. Currently, the main economic activities of the municipality are cattle ranching, agriculture (rice, maize, sesame, and soybean), and agroindustry. Regarding agricultural aptitude and phytogeography, the Amazon–Cerrado biome transition crosses the Canarana area, with 34.51% attributable to the Amazon biome and 65.49% to the Cerrado [18,19,22,23].

2.2. Pre-Processing

First, the IBGE cartographic basis for Canarana was adopted, following the methodological approach, and uploaded to Google Earth Engine (GEE). Due to processing limitations of the GEE platform resources, the acquisition, segmentation, and machine learning steps were performed on the Google Colaboratory platform (Colab), accessible at https://colab.research.google.com/ (accessed on 12 November 2022). Google Colab is based on the Python language in Jupyter Notebooks, allowing free runtime access to scripts. The PlanetScope NICFI imagery was added to GEE alongside the other multispectral imagery already available there, namely MODIS Terra/Aqua, OLI/Landsat-8 (Operational Land Imager), and MSI/Sentinel-2 (MultiSpectral Instrument). These multiple imagery sources were used for maize crop detection in this methodological approach.
Regarding the maize phenological cycle, images from 1 April 2022 to 31 May 2022 were considered, since this period represents the stage of greatest vegetative vigor typical of second-crop maize; a cloud-cover filter of up to 35% was applied. To obtain a single image representing the collection period, the median was used.
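A minimal sketch of this compositing step, assuming the earthengine-api Python client used in Colab; the asset path for the IBGE municipal boundary is hypothetical, and the cloud-cover property shown ('CLOUDY_PIXEL_PERCENTAGE') is specific to Sentinel-2, with the other collections exposing their own equivalents:

```python
import ee

ee.Authenticate()  # one-time browser prompt in Colab
ee.Initialize()

# Hypothetical asset: the Canarana boundary from the IBGE cartographic
# basis, uploaded by the user as a GEE asset.
roi = ee.FeatureCollection('users/your_user/canarana_ibge').geometry()

# Sentinel-2 surface reflectance over the peak vegetative-vigor window,
# keeping only scenes with <= 35% cloud cover.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(roi)
      .filterDate('2022-04-01', '2022-06-01')  # end date is exclusive
      .filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', 35)))

# A single median composite represents the whole period.
composite = s2.median().clip(roi)
```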
To reduce the effects of terrain irregularity before calculating the spectral indices and classifying the images, topographic correction was performed in GEE [24,25]. The correction used a semi-empirical method that takes into account the topography of the area and the solar angles (zenith and azimuth), the sun-canopy-sensor + C (SCS+C) method [25,26,27]. The SCS+C model is canopy-based, allowing changes in the direction of illumination to be considered when normalizing reflectance from inclined to horizontal surfaces.
The digital elevation model (DEM) came from the Shuttle Radar Topography Mission (SRTM); the SRTM V3 (SRTM Plus) product is provided by NASA JPL at a resolution of 1 arc second (approximately 30 m) [28,29]. The scene metadata were used to obtain information such as the solar zenith angle required for effective topographic correction.
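Since the slope derived from this DEM is later thresholded at 12% (Section 3), a short sketch of how the SRTM product and a percent-slope layer can be obtained in GEE; note that ee.Terrain.slope() returns degrees, so a conversion is needed:

```python
# SRTM V3 (1 arc second, ~30 m) as distributed in the GEE catalog.
srtm = ee.Image('USGS/SRTMGL1_003')

# Convert degrees to percent slope so the 12% mechanization threshold
# used in Section 3 can be applied.
slope_pct = (ee.Terrain.slope(srtm)
             .multiply(3.141592653589793 / 180)  # degrees -> radians
             .tan()
             .multiply(100))
mechanizable = slope_pct.lte(12)  # 1 where slope <= 12%, else 0
```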
Mapping maize crops depended on vegetation indices (VIs). The NDVI (Equation (1)) was considered since it produces an image of relative biomass in green tones, exploiting the contrast between chlorophyll pigment absorption in the red band and the high reflectivity of plant material in the near-infrared (NIR) band [30].
The enhanced vegetation index (EVI) (Equation (2)) was also considered: although similar to the NDVI, the EVI better corrects for soil and atmospheric effects in vegetation mapping and does not saturate over dense green vegetation [30]. Further, the perpendicular vegetation index (PVI) (Equation (3)) nullifies the soil background reflectance at the crop emergence stage, when an important share of the reflectance registered by the sensor comes from exposed soil. Finally, the perpendicular crop enhancement index (PCEI) (Equation (4)) was used to contrast the minimum and maximum crop development periods [30]. A code sketch of these indices follows the symbol definitions below.
$$\mathrm{NDVI}=\frac{\rho_{NIR}-\rho_{RED}}{\rho_{NIR}+\rho_{RED}} \tag{1}$$
$$\mathrm{EVI}=g\times\frac{\rho_{NIR}-\rho_{RED}}{\rho_{NIR}+c_{1}\times\rho_{RED}-c_{2}\times\rho_{BLUE}+1} \tag{2}$$
$$\mathrm{PVI}=\frac{\rho_{NIR}-(a\times\rho_{RED})-b}{\sqrt{1+a^{2}}} \tag{3}$$
$$\mathrm{PCEI}=g\times\frac{(Max_{PVI}+S)-(Min_{PVI}+S)}{(Max_{PVI}+S)+(Min_{PVI}+S)} \tag{4}$$
where:
ρNIR—reflectance in the near-infrared spectral range;
ρRED—reflectance in the red spectral range;
ρBLUE—reflectance in the blue spectral range;
g—gain factor (10²);
c1—atmospheric effects correction coefficient for red (6.0);
c2—atmospheric effects correction coefficient for blue (7.5);
a—soil line slope (1.17);
b—soil line intercept (3.37);
MaxPVI—maximum PVI value observed during the period of maximum maize crop development;
MinPVI—minimum PVI value observed in the pre-planting and/or emergence period;
S—enhancement coefficient (a value of 10² is assigned).
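These definitions translate directly into Earth Engine band math. The sketch below assumes Sentinel-2 band names (B2 = blue, B4 = red, B8 = NIR) and inlines the coefficient values listed above; for the other sensors only the band names change:

```python
def add_indices(img):
    """Attach NDVI, EVI, and PVI bands (Equations (1)-(3))."""
    ndvi = img.normalizedDifference(['B8', 'B4']).rename('NDVI')
    # g = 100, c1 = 6.0, c2 = 7.5, per the symbol list above.
    evi = img.expression(
        '100 * (NIR - RED) / (NIR + 6.0 * RED - 7.5 * BLUE + 1)',
        {'NIR': img.select('B8'), 'RED': img.select('B4'),
         'BLUE': img.select('B2')}).rename('EVI')
    # a = 1.17 (soil line slope), b = 3.37 (soil line intercept).
    pvi = img.expression(
        '(NIR - 1.17 * RED - 3.37) / sqrt(1 + 1.17 * 1.17)',
        {'NIR': img.select('B8'), 'RED': img.select('B4')}).rename('PVI')
    return img.addBands([ndvi, evi, pvi])

# PCEI (Equation (4)) contrasts PVI composites from the peak-development
# and pre-planting/emergence dates, with g = S = 100.
def pcei(pvi_max, pvi_min, g=100, s=100):
    num = pvi_max.add(s).subtract(pvi_min.add(s))
    den = pvi_max.add(s).add(pvi_min.add(s))
    return num.divide(den).multiply(g).rename('PCEI')

composite = add_indices(composite)
```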

2.3. Segmentation and Classification

The segmentation and classification process began with textural feature extraction via the gray-level co-occurrence matrix (GLCM). The GLCM statistical approach derives texture features from the distribution of observed intensity combinations at specified relative positions in the image (Equations (5)–(9)) [31,32]. The algorithm requires an eight-bit gray-level image, generated by linearly combining the near-infrared, red, and green bands, and yields eighteen different textural indices [25,33]. From these, three texture measures were selected: inverse difference moment (IDM), sum entropy (SENT), and dissimilarity (DISS) [34]; a code sketch follows the symbol definitions below.
$$\text{mean}=\frac{1}{\#P_{Obj}}\sum_{(x,y)\in P_{Obj}}c_{k}(x,y) \tag{5}$$
$$\text{standard deviation}=\sqrt{\frac{1}{\#P_{Obj}}\sum_{(x,y)\in P_{Obj}}\left(c_{k}(x,y)-\frac{1}{\#P_{Obj}}\sum_{(x,y)\in P_{Obj}}c_{k}(x,y)\right)^{2}} \tag{6}$$
$$\text{GLCM homogeneity}=\sum_{i,j=0}^{N-1}\frac{P_{i,j}}{1+(i-j)^{2}} \tag{7}$$
$$\text{GLCM dissimilarity}=\sum_{i,j=0}^{N-1}P_{i,j}\,|i-j| \tag{8}$$
$$\text{GLCM entropy}=\sum_{i,j=0}^{N-1}P_{i,j}\,(-\ln P_{i,j}) \tag{9}$$
where:
P_Obj—the set of pixels {(x, y) : (x, y) ∈ P_Obj} of an image object;
#P_Obj—total number of pixels contained in P_Obj;
ck(x, y)—pixel value of image layer k at pixel coordinates (x, y);
i—row number of the co-occurrence matrix;
j—column number of the co-occurrence matrix;
P_{i,j}—the normalized value in cell i, j: P_{i,j} = V_{i,j} / Σ_{i,j=0}^{N−1} V_{i,j};
V_{i,j}—the value in cell i, j of the co-occurrence matrix;
N—the number of rows or columns of the co-occurrence matrix.
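In GEE, this step reduces to building the gray-level image and calling glcmTexture(), which emits the textures of Equations (5)–(9), among others, as bands. The (0.30, 0.59, 0.11) weights follow the object-oriented workflow of Tassi and Vizzari [33], and the rescaling assumes 0–10,000 surface reflectance values:

```python
# Eight-bit gray-level image from a linear combination of NIR, red,
# and green, then GLCM textures in a 4-pixel neighborhood.
gray = (composite.expression(
            '(0.3 * NIR) + (0.59 * RED) + (0.11 * GREEN)',
            {'NIR': composite.select('B8'),
             'RED': composite.select('B4'),
             'GREEN': composite.select('B3')})
        .divide(10000).multiply(255).toUint8()
        .rename('gray'))

glcm = gray.glcmTexture(size=4)

# Keep the three textures retained in the study: inverse difference
# moment, dissimilarity, and sum entropy.
textures = glcm.select(['gray_idm', 'gray_diss', 'gray_sent'])
stack = composite.addBands(textures)
```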
After this procedure, principal component (PC) analysis was carried out for data dimensionality reduction, concentrating as much of the original information as possible in the smallest number of principal components. Here, the data comprise 16 input layers among spectral bands, vegetation indices, and texture features [25,35].
This process significantly reduces the computational burden of feature extraction by transforming a set of correlated variables (the original bands) into distinct uncorrelated variables (the principal components) that retain the bulk of the primary information, significantly speeding up the maximum likelihood classification process [36]. These techniques have been widely applied in remote sensing to classify land use and land cover and their changes [35,37].
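A sketch of the standard Earth Engine PCA recipe (centered covariance followed by eigen-decomposition), assuming the 16-layer stack described above; the region and scale arguments determine where the statistics are sampled:

```python
def principal_components(image, region, scale, n_bands):
    band_names = image.bandNames()

    # Center each band on its regional mean.
    means = image.reduceRegion(ee.Reducer.mean(), region, scale,
                               maxPixels=1e9)
    centered = image.subtract(ee.Image.constant(means.values(band_names)))

    # Covariance matrix of the centered stack, then eigen-decomposition;
    # column 0 of the result holds eigenvalues, the rest eigenvectors.
    arrays = centered.toArray()
    covar = arrays.reduceRegion(ee.Reducer.centeredCovariance(), region,
                                scale, maxPixels=1e9)
    eigens = ee.Array(covar.get('array')).eigen()
    eigen_vectors = eigens.slice(1, 1)

    # Project each pixel vector onto the eigenvectors.
    pc_names = ['PC%02d' % (i + 1) for i in range(n_bands)]
    return (ee.Image(eigen_vectors)
            .matrixMultiply(arrays.toArray(1))
            .arrayProject([0])
            .arrayFlatten([pc_names]))

pcs = principal_components(stack, roi, 30, 16).select(
    ['PC01', 'PC02', 'PC03'])
```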
Next, the images went through the segmentation step via the simple non-iterative clustering (SNIC) algorithm [38]. SNIC groups similar pixels into image objects carrying the spectral and textural information that is employed in the classification step. SNIC is an improved version of the simple linear iterative clustering (SLIC) segmentation algorithm that benefits from a non-iterative procedure and imposes the connectivity rule from the initial stage [34].
SNIC first initializes centroids on a regular grid over the image, and the distance of each pixel to a centroid, measured jointly in color space and spatial coordinates, determines its assignment. The integrated spatial and color distances yield efficient, compact, and nearly uniform polygons, identifying objects (clusters) according to the input parameters and generating a multi-band raster that includes the clusters plus additional layers containing the average values of the input features [25,33,39,40]. The main parameters of the SNIC algorithm are the image, size, compactness, connectivity, neighborhood size, and seeds [39,40].
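The GEE implementation of SNIC exposes exactly these parameters; the values below are illustrative placeholders rather than the ones tuned in this study:

```python
seeds = ee.Algorithms.Image.Segmentation.seedGrid(24)

snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=pcs,            # first three principal components
    size=32,              # superpixel seed spacing
    compactness=1,        # spatial vs. spectral distance weighting
    connectivity=8,       # 4- or 8-neighborhood
    neighborhoodSize=64,  # tile overlap to avoid seams
    seeds=seeds)

# SNIC returns a 'clusters' band plus per-object means of each input.
objects = snic.select(['PC01_mean', 'PC02_mean', 'PC03_mean', 'clusters'])
```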
Classification used the random forest (RF) classifier, chosen for its high classification accuracy relative to other classifiers and for its suitability to the dataset traits and the applied methods [25,33,39,40,41,42]. RF also reduces the risk of overfitting the training data, regardless of the large number of decision trees, since each tree uses a random subset of the training data and a limited number of randomly selected predictor variables [43].
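A sketch of the object-based RF classification over the SNIC object means, assuming a FeatureCollection of labeled field points with an integer 'class' property (built in the sampling step of Section 2.4); the tree count shown is the Landsat-optimal 120 reported in Section 3:

```python
# Sample the object-mean bands at the labeled field points.
training = objects.sampleRegions(
    collection=train_points,  # defined in the sampling step below
    properties=['class'],
    scale=30)

rf = ee.Classifier.smileRandomForest(numberOfTrees=120).train(
    features=training,
    classProperty='class',
    inputProperties=objects.bandNames())

classified = objects.classify(rf)
```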

2.4. Sampling and Accuracy Assessment

We collected 2200 random field points covering maize crops and other land-use and land-cover types: 74 points of water, 8 of cotton, 10 of agricultural expansion area, 9 of recovering area, 6 of urban area, 239 of Cerrado, 7 of crotalaria, 4 of beans, 347 of forest, 247 of sesame, 84 of millet, 725 of second-crop maize, 159 of pasture, 149 of degraded pasture, 14 of fallow soil, 4 of exposed soil, and 104 of sorghum were determined for RF training and validation (Figure 3). The Locus Map application was used to collect these points with an average precision of 5 m. For each class, 70% of the points were used in the training phase and 30% in the validation phase, selected randomly [25,33,40].
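The 70/30 split can be reproduced with a random column; for brevity the sketch applies a global split, whereas the study drew 70/30 within each class, and the asset path is hypothetical:

```python
# 'points' is the 2200-point FeatureCollection with a 'class' property.
points = ee.FeatureCollection('users/your_user/field_points_2200')

split = points.randomColumn('rand', 1)
train_points = split.filter(ee.Filter.lt('rand', 0.7))   # ~70%
valid_points = split.filter(ee.Filter.gte('rand', 0.7))  # ~30%
```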
After processing and classifying the images and obtaining the thematic maps of the maize areas, we analyzed the numerical confusion matrix, which determines the method's accuracy by comparing the class assigned to each object with the real class verified in the field, indicating a posteriori the correct evaluations and the errors among the strata studied. The confusion matrix provides the classification's overall accuracy (OA) (Equation (10)); the producer accuracy (PA) (Equation (11)), related to the omission error, which indicates the probability that a reference sample was correctly classified; and the user accuracy (UA) (Equation (12)), related to the commission error, which indicates the probability that a polygon or pixel assigned to a class actually represents that class on the ground [25,33,39,40,44,45]. In addition, the kappa coefficient (Equation (13)) was applied to assess the reliability and accuracy of the classified data [46]. A code sketch of this assessment follows the symbol definitions below.
$$OA\,(\%)=\frac{\sum_{i=1}^{n}P_{ii}}{N}\times 100 \tag{10}$$
$$PA\,(\%)=\frac{P_{ii}}{P_{+i}}\times 100 \tag{11}$$
$$UA\,(\%)=\frac{P_{ii}}{P_{i+}}\times 100 \tag{12}$$
$$kappa\,(\%)=\frac{N\times\sum_{i=1}^{n}P_{ii}-\sum_{i=1}^{n}(P_{i+}\times P_{+i})}{N^{2}-\sum_{i=1}^{n}(P_{i+}\times P_{+i})}\times 100 \tag{13}$$
where:
n—total number of columns in the confusion matrix, i.e., the total number of categories;
Pii—number of correctly classified samples of the crop type in row i and column i of the confusion matrix;
Pi+—total number of samples of the crop type in row i;
P+i—total number of samples of the crop type in column i;
N—total number of samples used for verification.
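Earth Engine derives all four statistics of Equations (10)–(13) from a single error matrix; a sketch under the same assumptions as above:

```python
# Sample the classified map at the held-out validation points.
validated = classified.sampleRegions(
    collection=valid_points, properties=['class'], scale=30)

# Rows = reference ('class'), columns = prediction ('classification').
matrix = validated.errorMatrix('class', 'classification')

print('OA:', matrix.accuracy().getInfo())            # Equation (10)
print('PA:', matrix.producersAccuracy().getInfo())   # Equation (11)
print('UA:', matrix.consumersAccuracy().getInfo())   # Equation (12)
print('kappa:', matrix.kappa().getInfo())            # Equation (13)
```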

3. Results and Discussion

From the digital elevation model, two slope classes were derived using a 12% threshold, based on the mechanization factor as one of the attributes to be observed in agricultural aptitude and considering that some machines available on the market, mainly harvesters, are adapted for slopes of up to 12% [47,48,49]. An area of 10,465.36 km² with slopes of up to 12% was obtained, corresponding to 96.55% of the total area of the municipality.
To improve the supervised classification results, PC analysis was carried out using as input the reflectance data of 16 layers based on the red (R), green (G), blue (B), and near-infrared (NIR) bands of the four sensors, as well as the terrain slope and the vegetation indices. The first three components (PC1, PC2, and PC3) were retained, as they express most of the dataset variance (Table 1).
Our results corroborate other findings [33,35,50] in which more than 90% of the information of the original bands was captured by the first three principal components, which can be expressed in false-color composition as shown in Figure 4.
Subsequently, segmentation based on the first three principal components was carried out. Combining the texture features with the vegetation indices, the original bands, and the PC analysis improved the segmentation results [33].
In the segmentation, one factor that may have influenced the results was the resampling of the MSI/Sentinel-2 and Planet NICFI images to a 30 m spatial resolution, which made it possible to evaluate the spectral, textural, contextual, and hierarchical features of all multispectral images jointly. The limited processing capacity of the free GEE account was also a constraint, since the platform is of global use and free access is therefore capped [33,51,52,53].
In a step prior to the final classification, an accuracy test with different numbers of decision trees in the random forest showed different results for the four sensors; a range from 10 to 200 trees was analyzed to find the best combination of accuracy and computational cost [33].
In view of the highest random forest classification accuracy, the optimal numbers of decision trees per imagery sensor were 120, 110, 150, and 180 for OLI/Landsat-8, MODIS, Planet NICFI, and MSI/Sentinel-2, respectively (Figure 5). This process concomitantly evaluated the importance of each image band for land-use and land-cover classification.
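The per-sensor tuning summarized in Figure 5 can be approximated by sweeping the tree count and scoring each fit on the validation objects; a sketch, again under the assumptions above:

```python
# Validation samples over the SNIC object means.
valid_sampled = objects.sampleRegions(
    collection=valid_points, properties=['class'], scale=30)

# Accuracy as a function of the number of trees, from 10 to 200.
for n_trees in range(10, 201, 10):
    rf_n = ee.Classifier.smileRandomForest(numberOfTrees=n_trees).train(
        features=training,
        classProperty='class',
        inputProperties=objects.bandNames())
    oa = (valid_sampled.classify(rf_n)
          .errorMatrix('class', 'classification')
          .accuracy())
    print(n_trees, oa.getInfo())
```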
From the classified data (Figure 6 and Figure 7), the confusion matrices were generated, with OA from 86.41% to 88.65% (Figure 8, Figure 9, Figure 10 and Figure 11; Table 2). The Landsat imagery had the highest accuracy, with an OA of 88.65% and a kappa of 84.61%, and PAs between 65.16% and 91.53%. The lowest accuracy corresponded to the "other land uses" class and the highest to the second-harvest maize class, whose UA was 91.98% [53,54,55].
The classification of the second-harvest maize areas with Planet images deviated from the results of the other sensors, with a total area for this class of about 330,000 hectares, indicating that despite the higher spatial resolution, the classification algorithm confused second-harvest maize with the other second-harvest crops (Table 2). In the analysis of variance and Tukey's test for means comparison, there was no significant difference at 5% probability among the classifications performed, implying that the responses of the sensors were statistically similar (Table 3).
Despite the lower spatial resolution relative to the other sensors, MODIS images showed values very close to those of the OLI/Landsat sensor, with an OA of 86.83% and a kappa coefficient of 82.01% (Table 2) and, for the focus class of the study, a CA (consumer accuracy) of 90.85% and a PA (producer accuracy) of 89.40%.
The independence of LULC classification accuracy from spatial resolution was seen in the comparison between the OA and kappa coefficient results of the MODIS and MSI/Sentinel-2 sensors, indicating that high-spatial-resolution data are not always superior to low-resolution data for identifying land uses when the objects have varied and complex attributes [25,33,52].
The PlanetScope imagery, despite its high spatial resolution, achieved inferior overall results compared to OLI/Landsat-8 [33]. This result is probably related to the lower temporal and spectral resolution of this dataset and to the training points having been collected in a concentrated manner rather than spread over the entire area of interest [56]. This combination of factors did not produce enough information to adequately differentiate the classes, suggesting that the 4.77 m spatial resolution did not provide enough textural information for effective separation of the classes studied. The results of the Planet NICFI and Sentinel-2 images were similar to each other, with OAs of 86.79% and 86.41% and kappas of 82.06% and 81.26%, respectively; CAs were higher than 87% and PAs higher than 89%.
Classification in a highly heterogeneous and fragmented agricultural region is challenging due to the similar reflectance among second-harvest crops as well as the different temporal and spatial resolutions of the images used. In this situation, combining several image types of different spatial and temporal resolutions, or using image time series, has achieved better results in past research than analyses using separate, once-monthly images [55,57,58].
The results obtained in this research, especially for the Landsat 8 and MODIS Terra images, are very consistent with respect to classification accuracy and suggest that future studies combine these images with PlanetScope NICFI and Sentinel-2 images, as well as other sensors, in order to achieve results more refined than the present study.
These results were made possible by open-access remote sensing datasets and the computing power of the GEE platform, which has demonstrated considerable versatility and adaptability thanks to its integrative capabilities and its efficient environment for scripting in JavaScript and Python. The study relied on open-source, process-focused technologies that are robust and scalable over large spatial extents, and good results were thus obtained.

4. Conclusions

The evaluations performed showed satisfactory results, since the kappa coefficient and OA presented values higher than 80%. It is therefore concluded that the GEOBIA methodology, employing the SNIC + GLCM combination with the random forest classifier, was successful. However, there is still room for improvement in the segmentation step to make the methodology applicable to large agricultural areas. Using other techniques with more spectral models, as well as other sensor types as input to the geo-object-oriented analysis, could improve our findings. For future work, the use of hyperspectral data to improve identification should be considered.
Furthermore, this study highlighted the overall reliability of the GEOBIA methodology, although its complexity results in higher computational demands. This can affect the execution of the GEE code, especially when using high-spatial-resolution data such as that from Sentinel-2 and PlanetScope.

Author Contributions

Conceptualization, C.A.d.S.J. and I.C.M.J.; methodology, C.A.d.S.J. and I.C.M.J.; software, R.D., C.L.B. and P.E.T.; validation, C.A.d.S.J., I.C.M.J. and L.P.R.T.; formal analysis, F.S.R. and C.A.d.S.J.; investigation, I.C.M.J.; resources, C.A.d.S.J., J.F.d.O.-J. and M.L.; data curation, J.L.D.-S. and F.H.R.B.; writing—original draft preparation, I.C.M.J. and C.A.d.S.J.; writing—review and editing, P.E.T., L.P.R.T., F.S.R. and J.L.D.-S.; visualization, R.D. and J.F.d.O.-J.; supervision, C.A.d.S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001, and the National Council for Research and Development (CNPq). We would also like to thank the anonymous reviewers for providing insights to improve the manuscript. We are also thankful to the research laboratory of the State University of Mato Grosso (UNEMAT)—https://pesquisa.unemat.br/gaaf/ (accessed on 12 December 2022). Thanks to the Fundação de Apoio ao Desenvolvimento do Ensino, Ciência e Tecnologia do Estado de Mato Grosso (FAPEMAT) for the financial support of the research project (0001464/2022 and 000125/2023) and to the Fundação de Apoio ao Desenvolvimento do Ensino, Ciência e Tecnologia do Estado de Mato Grosso do Sul (FUNDECT), numbers 88/2021 and 07/2022, and SIAFEM numbers 30478 and 31333; and CNPq Research Productivity Scholars (processes 309250/2021-8; 306022/2021-4; 303767/2020-0; 304979/2022-8).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Conab-Boletim Da Safra de Grãos. Available online: https://www.conab.gov.br/info-agro/safras/graos/boletim-da-safra-de-graos (accessed on 30 October 2022).
2. Comexstat Exportação e Importação Geral. Available online: http://comexstat.mdic.gov.br/pt/geral/37761 (accessed on 5 July 2021).
3. CONAB Acompanhamento Da Safra Brasileira. Cia. Nac. Abast. Acompan. Safra Bras. 2022, 7, 1–89.
4. Bertolin, N.d.O.; Filgueiras, R.; Venancio, L.P.; Mantovani, E.C. Predição da produtividade de milho irrigado com auxílio de imagens de satélite. Rev. Bras. Agric. Irrig. 2017, 11, 1627–1638.
5. Oldoni, L.V. Mapeamento de Soja e Milho Com Mineração de Dados e Imagens Sintéticas Landsat e MODIS. Dissertação; Universidade Estadual do Oeste do Paraná—Campus de Cascavel: Cascavel, Brazil, 2018.
6. Pino, F.A. IEA-Instituto de Economia Agrícola-Informações Econômicas; IEA: São Paulo, Brazil, 2001; Volume 31, pp. 55–58.
7. Oldoni, L.V.; Sanches, I.D.; Picoli, M.C.A.; Covre, R.M.; Fronza, J.G. LEM+ Dataset: For Agricultural Remote Sensing Applications. Data Brief 2020, 33, 106553.
8. Prudente, V.H.R.; Martins, V.S.; Vieira, D.C.; Silva, N.R.d.F.; Adami, M.; Sanches, I.D.A. Limitations of Cloud Cover for Optical Remote Sensing of Agricultural Areas across South America. Remote Sens. Appl. 2020, 20, 100414.
9. Khadim, F.K.; Su, H.; Xu, L.; Tian, J. Soil Salinity Mapping in Everglades National Park Using Remote Sensing Techniques and Vegetation Salt Tolerance. Phys. Chem. Earth 2019, 110, 31–50.
10. Speranza, E.A.; Grego, C.R.; Gebler, L. Analysis of Pest Incidence on Apple Trees Validated by Unsupervised Machine Learning Algorithms. Rev. Eng. Na Agric. Reveng 2022, 30, 63–74.
11. de Oliveira, G.; Chen, J.M.; Mataveli, G.A.V.; Chaves, M.E.D.; Seixas, H.T.; da Cardozo, F.S.; Shimabukuro, Y.E.; He, L.; Stark, S.C.; dos Santos, C.A.C. Rapid Recent Deforestation Incursion in a Vulnerable Indigenous Land in the Brazilian Amazon and Fire-Driven Emissions of Fine Particulate Aerosol Pollutants. Forests 2020, 11, 829.
12. Mollick, T.; Azam, M.G.; Karim, S. Geospatial-Based Machine Learning Techniques for Land Use and Land Cover Mapping Using a High-Resolution Unmanned Aerial Vehicle Image. Remote Sens. Appl. 2023, 29, 100859.
13. Liu, B.; Song, W. Mapping Abandoned Cropland Using Within-Year Sentinel-2 Time Series. Catena 2023, 223, 106924.
14. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693.
15. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 2017, 202, 18–27.
16. Google Earth Engine Platform. Available online: https://earthengine.google.com/platform/ (accessed on 21 January 2023).
17. SIDRA Levantamento Sistemático Da Produção Agrícola-Setembro. 2020. Available online: https://sidra.ibge.gov.br/home/lspa/mato-grosso (accessed on 25 June 2021).
18. IBGE Cidades e Estados-Instituto Brasileiro de Geografia e Estatística. Available online: https://cidades.ibge.gov.br/brasil/mt/canarana/panorama (accessed on 24 June 2021).
19. Alves, H.Q.; Rezende, A.C.P.; Sposito, R.d.C. Geoprocessamento Como Ferramenta de Conservação de Recursos Hídricos e de Biodiversidade: Um Estudo de Caso Para o Município de Canarana—MT. An. XIV Simpósio Bras. Sensoriamento Remoto INPE 2009, 14, 3439–3446.
20. Funk, C.; Peterson, P.; Landsfeld, M.; Pedreros, D.; Verdin, J.; Shukla, S.; Husak, G.; Rowland, J.; Harrison, L.; Hoell, A.; et al. The Climate Hazards Infrared Precipitation with Stations—A New Environmental Record for Monitoring Extremes. Sci. Data 2015, 2, 150066.
21. Embrapa Solos. Sistema Brasileiro de Classificação de Solos, 3rd ed.; Santos, H.G.d., Jacomine, P.K.T., Anjos, L.H.C.d., Oliveira, V.Á.d., Lumbreras, J.F., Coelho, M.R., Almeida, J.A.d., Cunha, T.J.F., Oliveira, J.B.d., Eds.; Embrapa: Brasília, Brazil, 2013; ISBN 9788570351982.
22. Ferreira, J.C.V.; de Moura e Silva, J.; Silva, P.P.C.; Alencastro, A. Mato Grosso e Seus Municípios; Editora Buriti: Buriticupú, Brazil, 2001.
23. MapBiomas Mapas de Referência. Available online: https://mapbiomas.org/mapas-de-referencia (accessed on 10 July 2021).
24. Richter, R.; Kellenberger, T.; Kaufmann, H. Comparison of Topographic Correction Methods. Remote Sens. 2009, 1, 184–196.
25. Tassi, A.; Gigante, D.; Modica, G.; di Martino, L.; Vizzari, M. Pixel- vs. Object-Based Landsat 8 Data Classification in Google Earth Engine Using Random Forest: The Case Study of Maiella National Park. Remote Sens. 2021, 13, 2299.
26. Belcore, E.; Piras, M.; Wozniak, E. Specific Alpine Environment Land Cover Classification Methodology: Google Earth Engine Processing for Sentinel-2 Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B3-2, 663–670.
27. Shepherd, J.D.; Dymond, J.R. Correcting Satellite Imagery for the Variance of Reflectance and Illumination with Topography. Int. J. Remote Sens. 2003, 24, 3503–3514.
28. Ngula Niipele, J.; Chen, J. The Usefulness of Alos-Palsar Dem Data for Drainage Extraction in Semi-Arid Environments in The Iishana Sub-Basin. J. Hydrol. Reg. Stud. 2019, 21, 57–67.
29. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The Shuttle Radar Topography Mission. Rev. Geophys. 2007, 45, 1–33.
30. Silva, C.A.; Nanni, M.R.; Teodoro, P.E.; Silva, G.F.C. Vegetation Indices for Discrimination of Soybean Areas: A New Approach. Agron. J. 2017, 109, 1331–1343.
31. Silva Junior, C.A.d.; Nanni, M.R.; Oliveira-Júnior, J.F.d.; Cezar, E.; Teodoro, P.E.; Delgado, R.C.; Shiratsuchi, L.S.; Shakir, M.; Chicati, M.L. Object-Based Image Analysis Supported by Data Mining to Discriminate Large Areas of Soybean. Int. J. Digit Earth 2019, 12, 270–292.
32. Rege, A.; Warnekar, S.B.; Lee, J.S.H. Mapping Cashew Monocultures in the Western Ghats Using Optical and Radar Imagery in Google Earth Engine. Remote Sens. Appl. 2022, 28, 100861.
33. Tassi, A.; Vizzari, M. Object-Oriented LULC Classification in Google Earth Engine Combining SNIC, GLCM, and Machine Learning Algorithms. Remote Sens. 2020, 12, 3776.
34. Silva Junior, C.A. Estimativa e Discriminação de Áreas de Soja [Glycine max L.] No Estado Do Paraná Com Dados Mono e Multitemporais do Sensor MODIS. Dissertação; Universidade Estadual de Maringá: Maringá, Brazil, 2014.
35. Estornell, J.; Martí-Gavliá, J.M.; Sebastiá, M.T.; Mengual, J. Principal Component Analysis Applied to Remote Sensing. Model. Sci. Educ. Learn. 2013, 6, 83.
36. Jia, X.; Richards, J.A. Segmented Principal Components Transformation for Efficient Hyperspectral Remote-Sensing Image Display and Classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 538–542.
37. Meneses, P.R.; Almeida, T. Introdução ao Processamento de Imagens de Sensoriamento Remoto; UnB-CNPq: Brasília, Brazil, 2012. Available online: https://edisciplinas.usp.br/pluginfile.php/5550408/mod_resource/content/3/Livro-SensoriamentoRemoto.pdf (accessed on 30 October 2022).
38. Achanta, R.; Susstrunk, S. Superpixels and Polygons Using Simple Non-Iterative Clustering. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA; pp. 4895–4904.
39. Amani, M.; Kakooei, M.; Moghimi, A.; Ghorbanian, A.; Ranjgar, B.; Mahdavi, S.; Davidson, A.; Fisette, T.; Rollin, P.; Brisco, B.; et al. Application of Google Earth Engine Cloud Computing Platform, Sentinel Imagery, and Neural Networks for Crop Mapping in Canada. Remote Sens. 2020, 12, 3561.
40. Luo, C.; Qi, B.; Liu, H.; Guo, D.; Lu, L.; Fu, Q.; Shao, Y. Using Time Series Sentinel-1 Images for Object-Oriented Crop Classification in Google Earth Engine. Remote Sens. 2021, 13, 561.
41. Cheng, X.; Liu, W.; Zhou, J.; Wang, Z.; Zhang, S.; Liao, S. Extraction of Mountain Grasslands in Yunnan, China, from Sentinel-2 Data during the Optimal Phenological Period Using Feature Optimization. Agronomy 2022, 12, 1948.
42. Chaves, M.E.D.; Picoli, M.C.A.; Sanches, I.D. Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review. Remote Sens. 2020, 12, 3062.
43. Wessels, K.J.; Bergh, F.V.D.; Roy, D.P.; Salmon, B.P.; Steenkamp, K.C.; MacAlister, B.; Swanepoel, D.; Jewitt, D. Rapid Land Cover Map Updates Using Change Detection and Robust Random Forest Classifiers. Remote Sens. 2016, 8, 888.
44. Castillejo-González, I.L.; López-Granados, F.; García-Ferrer, A.; Peña-Barragán, J.M.; Jurado-Expósito, M.; de la Orden, M.S.; González-Audicana, M. Object- and Pixel-Based Analysis for Mapping Crops and Their Agro-Environmental Associated Measures Using QuickBird Imagery. Comput. Electron. Agric. 2009, 68, 207–215.
45. Xu, R. Mapping Rural Settlements from Landsat and Sentinel Time Series by Integrating Pixel- and Object-Based Methods. Land 2021, 10, 244.
46. da Silva Junior, C.A.; Moreira, E.P.; Frank, T.; Moreira, M.A.; Barcellos, D. Comparação de Áreas de Soja (Glycine max (L.) Merr.) Obtidas Por Meio Da Interpretação de Imagens TM/Landsat e MODIS/Terra No Município de Maracaju (MS). Biosci. J. 2014, 30, 707–716.
47. Silva, C.O. Geoprocessamento Aplicado ao Zoneamento Agrícola Para Cana-de-Açúcar Irrigada do Estado do Piauí; Faculdade de Ciências Agronômicas da UNESP: Botucatu, Brazil, 2016.
48. Manzatto, C.V.; Assad, E.D.; Bacca, J.F.M.; Zaroni, M.J.; Pereira, N.R. Zoneamento Agroecológico da Cana-de-Açúcar: Expandir a Produção, Preservar a Vida, Garantir o Futuro; Embrapa Solos, 2009. Available online: https://ainfo.cnptia.embrapa.br/digital/bitstream/CNPS-2010/14408/1/ZonCana.pdf (accessed on 30 October 2022).
49. Garcia, Y.M.; Campos, S.; Tagliarini, F.S.N.; Campos, M.; Rodrigues, B.T. Declividade e Potencial Para Mecanização Agrícola da Bacia Hidrográfica do Ribeirão Pederneiras, Pederneiras/SP. Rev. Bras. Eng. Biossistemas 2020, 14, 62–72.
50. Aneece, I.; Thenkabail, P. Accuracies Achieved in Classifying Five Leading World Crop Types and Their Growth Stages Using Optimal Earth Observing-1 Hyperion Hyperspectral Narrowbands on Google Earth Engine. Remote Sens. 2018, 10, 2027.
51. Zhang, C.; Li, X.; Wu, M.; Qin, W.; Zhang, J. Object-Oriented Classification of Land Cover Based on Landsat 8 OLI Image Data in the Kunyu Mountain. Sci. Geogr. Sin. 2018, 38, 1904–1913.
52. Ruiz, L.F.C.; Guasselli, L.A.; Simioni, J.P.D.; Belloli, T.F.; Barros Fernandes, P.C. Object-Based Classification of Vegetation Species in a Subtropical Wetland Using Sentinel-1 and Sentinel-2A Images. Sci. Remote Sens. 2021, 3, 100017.
53. Stromann, O.; Nascetti, A.; Yousif, O.; Ban, Y. Dimensionality Reduction and Feature Selection for Object-Based Land Cover Classification Based on Sentinel-1 and Sentinel-2 Time Series Using Google Earth Engine. Remote Sens. 2019, 12, 76.
54. Maxwell, A.E.; Strager, M.P.; Warner, T.A.; Ramezan, C.A.; Morgan, A.N.; Pauley, C.E. Large-Area, High Spatial Resolution Land Cover Mapping Using Random Forests, GEOBIA, and NAIP Orthophotography: Findings and Recommendations. Remote Sens. 2019, 11, 1409.
55. El Imanni, H.S.; El Harti, A.; Hssaisoune, M.; Velastegui-Montoya, A.; Elbouzidi, A.; Addi, M.; El Iysaouy, L.; El Hachimi, J. Rapid and Automated Approach for Early Crop Mapping Using Sentinel-1 and Sentinel-2 on Google Earth Engine; A Case of a Highly Heterogeneous and Fragmented Agricultural Region. J. Imaging 2022, 8, 316.
56. Della-Silva, J.L.; da Silva Junior, C.A.; Lima, M.; Teodoro, P.E.; Nanni, M.R.; Shiratsuchi, L.S.; Teodoro, L.P.R.; Capristo-Silva, G.F.; Baio, F.H.R.; de Oliveira, G.; et al. CO2Flux Model Assessment and Comparison between an Airborne Hyperspectral Sensor and Orbital Multispectral Imagery in Southern Amazonia. Sustainability 2022, 14, 5458.
57. Inglada, J.; Vincent, A.; Arias, M.; Marais-Sicre, C. Improved Early Crop Type Identification by Joint Use of High Temporal Resolution SAR and Optical Image Time Series. Remote Sens. 2016, 8, 362.
58. Luo, C.; Liu, H.; Lu, L.; Liu, Z.; Kong, F.; Zhang, X. Monthly Composites from Sentinel-1 and Sentinel-2 Images for Regional Major Crop Mapping with Google Earth Engine. J. Integr. Agric. 2021, 20, 1944–1957.
Figure 1. Flowchart of the object-oriented classification methodology.
Figure 2. Location of the study area in Canarana municipality, Mato Grosso state, presented using the normalized difference vegetation index (NDVI).
Figure 3. Land-use and land-cover samples' locations in Canarana-MT.
Figure 4. PC analysis mosaicking for (A) OLI/Landsat-8, (B) MODIS Terra, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 5. Accuracy test with different quantities of decision trees in the random forest classification process for each imagery system considered: (A) OLI/Landsat-8, (B) MODIS Terra, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 6. Land-use and land-cover classification based on GEOBIA and random forest for each considered sensor: (A) OLI/Landsat-8, (B) MODIS, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 7. Classified second-crop maize areas clip: (A) OLI/Landsat-8, (B) MODIS, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 8. Confusion matrix for OLI/Landsat-8 imagery.
Figure 9. Confusion matrix for MODIS imagery.
Figure 10. Confusion matrix for Planet NICFI imagery.
Figure 11. Confusion matrix for MSI/Sentinel-2 imagery.
Table 1. Principal component variation in each sensor.

PC | OLI/Landsat-8 | MODIS | PlanetScope | MSI/Sentinel-2
PC01 | 97.02% | 94.52% | 98.22% | 95.28%
PC02 | 2.65% | 4.31% | 1.52% | 4.51%
PC03 | 0.29% | 1.08% | 0.18% | 0.15%
PC04 | 0.03% | 0.04% | 0.04% | 0.03%
PC05 | 0.01% | 0.03% | 0.03% | 0.02%
PC06 | 0.00% | 0.02% | 0.01% | 0.01%
PC07 | 0.00% | 0.00% | 0.00% | 0.00%
PC08 | 0.00% | 0.00% | 0.00% | 0.00%
PC09 | 0.00% | 0.00% | 0.00% | 0.00%
PC10 | 0.00% | 0.00% | 0.00% | 0.00%
PC11 | 0.00% | 0.00% | 0.00% | 0.00%
PC12 | 0.00% | −0.00% | 0.00% | 0.00%
PC13 | 0.00% | −0.00% | 0.00% | 0.00%
PC14 | −0.00% | −0.00% | 0.00% | −0.00%
PC15 | −0.00% | −0.00% | 0.00% | −0.00%
PC16 | −0.00% | −0.00% | −0.00% | −0.00%
Table 2. Overall accuracy (OA), kappa coefficient, and classified maize crop area (ha) for each sensor image classification.

Metric | Landsat-8 | MODIS | Planet | Sentinel-2
Overall accuracy | 88.65% | 86.83% | 86.79% | 86.41%
Kappa coefficient | 84.61% | 82.01% | 82.06% | 81.26%
Second-harvest maize area (ha) | 450,766.60 | 424,715.59 | 329,557.85 | 432,422.91
Table 3. Analysis of variance (ANOVA) and multiple comparison of means by Tukey's test at 5% probability for the overall accuracy and kappa coefficient results.

Analysis of variance

Variable | N | Mean | SD | SE | 95% Conf. Interval
MODIS | 2 | 84.42 | 3.42 | 2.42 | 53.72 to 115.12
Landsat-8 | 2 | 86.63 | 2.86 | 2.02 | 60.95 to 112.31
Planet | 2 | 84.43 | 3.35 | 2.37 | 54.37 to 114.49
Sentinel-2 | 2 | 83.84 | 3.64 | 2.57 | 51.18 to 116.50

Tukey's HSD test

Group 1 | Group 2 | Mean diff. | p-adj | Lower | Upper | Reject
Landsat-8 | MODIS | −2.2098 | 0.9 | −15.7536 | 11.3339 | False
Landsat-8 | Planet | −2.2037 | 0.9 | −15.7474 | 11.34 | False
Landsat-8 | Sentinel-2 | −2.7929 | 0.8225 | −16.3367 | 10.7508 | False
MODIS | Planet | 0.0061 | 0.9 | −13.5376 | 13.5498 | False
MODIS | Sentinel-2 | −0.5831 | 0.9 | −14.1268 | 12.9606 | False
Planet | Sentinel-2 | −0.5892 | 0.9 | −14.1329 | 12.9545 | False
