
Recent Advances for Crop Mapping and Monitoring Using Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 50676

Special Issue Editors


Guest Editor
Department of Geographical Sciences, University of Maryland, College Park, MD 20742, USA
Interests: physical, statistical and machine learning approaches for modeling of agricultural and environmental

Guest Editor
Department of Geographical Sciences, University of Maryland, College Park, MD 20742, USA
Interests: environmental science; agriculture; biofuels; carbon; land use

Guest Editor
Department of Geographical Sciences, University of Maryland, College Park, MD, USA
Interests: machine learning; deep learning; artificial intelligence; crop type mapping; cropland mapping; remote sensing; Earth science; food security

Special Issue Information

Dear Colleagues,

Recent years have seen rapid advances in the use of remote sensing data for agricultural applications, driven by engineering improvements in satellite sensors and the adoption of open data policies by satellite data providers. Concurrently, progress in data processing has produced more robust modeling and mapping algorithms. These advances have encouraged sectors ranging from government agencies and policy makers to private industry to incorporate remote sensing data into their agricultural decision support systems.

This Special Issue solicits papers that document recent advances in remote sensing applications in agriculture, including crop type mapping, crop water stress and crop disease monitoring, crop yield prediction, crop biophysical parameter estimation, cover crop mapping, and crop residue monitoring using remote sensing data. Research papers that use advanced remote sensing techniques such as multiresolution data fusion, SAR and optical data integration, SAR polarimetry, and SAR interferometry are welcome. We also encourage manuscripts that focus on advanced modeling approaches such as new methods in machine learning/artificial intelligence or their integration with physical models. 

Dr. Mehdi Hosseini
Dr. Ritvik Sahajpal
Dr. Hannah Kerner
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Cropland and crop type mapping
  • Machine learning
  • Crop yield and condition forecasting
  • SAR and optical data fusion
  • Conservation practice mapping
  • SAR polarimetry
  • SAR interferometry

Published Papers (13 papers)


Research


29 pages, 8463 KiB  
Article
Potential of Ultra-High-Resolution UAV Images with Centimeter GNSS Positioning for Plant Scale Crop Monitoring
by Jean-Marc Gilliot, Dalila Hadjar and Joël Michelin
Remote Sens. 2022, 14(10), 2391; https://doi.org/10.3390/rs14102391 - 16 May 2022
Cited by 5 | Viewed by 3630
Abstract
To implement agricultural practices that are more respectful of the environment, precision agriculture methods for monitoring crop heterogeneity are becoming more and more spatially detailed. The objective of this study was to evaluate the potential of Ultra-High-Resolution UAV images with centimeter GNSS positioning for plant-scale monitoring. A DJI Phantom 4 RTK UAV with a 20-megapixel RGB camera was used, flying at an altitude of 25 m (0.7 cm resolution). This study was conducted on an experimental plot sown with maize. A centimeter-precision Trimble Geo7x GNSS receiver was used for the field measurements. After evaluating the precision of the UAV’s RTK antenna in static mode on the ground, the positions of 17 artificial targets and 70 maize plants were measured during a series of flights in different RTK modes. Agisoft Metashape software was used for photogrammetric processing. The error in position of the UAV RTK antenna in static mode on the ground was less than one centimeter, in terms of both planimetry and elevation. The horizontal position error measured in flight on the 17 targets was less than 1.5 cm, while it was 2.9 cm in terms of elevation. Finally, according to the RTK modes, at least 81% of the maize plants were localized to within 5 cm of their position, and 95% to within 10 cm. Full article

27 pages, 44493 KiB  
Article
Integration of Satellite-Based Optical and Synthetic Aperture Radar Imagery to Estimate Winter Cover Crop Performance in Cereal Grasses
by Jyoti S. Jennewein, Brian T. Lamb, W. Dean Hively, Alison Thieme, Resham Thapa, Avi Goldsmith and Steven B. Mirsky
Remote Sens. 2022, 14(9), 2077; https://doi.org/10.3390/rs14092077 - 26 Apr 2022
Cited by 13 | Viewed by 3540
Abstract
The magnitude of ecosystem services provided by winter cover crops is linked to their performance (i.e., biomass and associated nitrogen content, forage quality, and fractional ground cover), although few studies quantify these characteristics across the landscape. Remote sensing can produce landscape-level assessments of cover crop performance. However, commonly employed optical vegetation indices (VI) saturate, limiting their ability to measure high-biomass cover crops. Contemporary VIs that employ red-edge bands have been shown to be more robust to saturation issues. Additionally, synthetic aperture radar (SAR) data have been effective at estimating crop biophysical characteristics, although this has not been demonstrated on winter cover crops. We assessed the integration of optical (Sentinel-2) and SAR (Sentinel-1) imagery to estimate winter cover crop biomass across 27 fields over three winter–spring seasons (2018–2021) in Maryland. We used log-linear models to predict cover crop biomass as a function of 27 VIs and eight SAR metrics. Our results suggest that the normalized difference red-edge vegetation index (NDVI_RE1; employing Sentinel-2 bands 5 and 8A), combined with SAR interferometric (InSAR) coherence, best estimated the biomass of cereal grass cover crops. However, these results were season- and species-specific (R2 = 0.74, 0.81, and 0.34; RMSE = 1227, 793, and 776 kg ha−1, for wheat (Triticum aestivum L.), triticale (Triticale hexaploide L.), and cereal rye (Secale cereale), respectively, in spring (March–May)). Compared to the optical-only model, InSAR coherence improved biomass estimations by 4% in wheat, 5% in triticale, and by 11% in cereal rye. Both optical-only and optical-SAR biomass prediction models exhibited saturation occurring at ~1900 kg ha−1; thus, more work is needed to enable accurate biomass estimations past the point of saturation.
To address this continued concern, future work could consider the use of weather and climate variables, machine learning models, the integration of proximal sensing and satellite observations, and/or the integration of process-based crop-soil simulation models and remote sensing observations. Full article
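A log-linear biomass model of the kind described above (biomass as a function of a red-edge VI plus InSAR coherence) can be sketched with ordinary least squares on log-transformed biomass. The coefficients and value ranges below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (assumed ranges): NDVI_RE1 in [0.2, 0.9],
# InSAR coherence in [0.2, 0.8]; biomass follows a log-linear response.
n = 200
ndvi_re1 = rng.uniform(0.2, 0.9, n)
coherence = rng.uniform(0.2, 0.8, n)
log_biomass = 5.0 + 2.5 * ndvi_re1 - 1.5 * coherence + rng.normal(0, 0.1, n)

# Log-linear model: log(biomass) = b0 + b1*VI + b2*coherence
X = np.column_stack([np.ones(n), ndvi_re1, coherence])
coef, *_ = np.linalg.lstsq(X, log_biomass, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((log_biomass - pred) ** 2) / np.sum((log_biomass - log_biomass.mean()) ** 2)
```

Exponentiating the prediction recovers biomass in kg ha−1; the saturation behavior discussed above would appear as systematic underprediction at high biomass.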

20 pages, 2762 KiB  
Article
CCTNet: Coupled CNN and Transformer Network for Crop Segmentation of Remote Sensing Images
by Hong Wang, Xianzhong Chen, Tianxiang Zhang, Zhiyong Xu and Jiangyun Li
Remote Sens. 2022, 14(9), 1956; https://doi.org/10.3390/rs14091956 - 19 Apr 2022
Cited by 52 | Viewed by 5021
Abstract
Semantic segmentation using remote sensing images is an efficient method for agricultural crop classification. Recent solutions in crop segmentation are mainly deep-learning-based methods built on two mainstream architectures: Convolutional Neural Networks (CNNs) and the Transformer. However, neither architecture alone is sufficient for the crop segmentation task, for three reasons. First, the ultra-high-resolution images need to be cut into small patches before processing, which breaks up the edge structure of the different categories. Second, because of the deficiency of global information, categories inside a crop field may be wrongly classified. Third, to restore complete images, the patches need to be spliced together, causing edge artifacts, small misclassified objects, and holes. Therefore, we proposed a novel architecture named the Coupled CNN and Transformer Network (CCTNet), which combines the local details (e.g., edge and texture) captured by the CNN with the global context captured by the Transformer to cope with the aforementioned problems. In particular, two modules, namely the Light Adaptive Fusion Module (LAFM) and the Coupled Attention Fusion Module (CAFM), are designed to efficiently fuse these advantages. Meanwhile, three effective methods named Overlapping Sliding Window (OSW), Testing Time Augmentation (TTA), and Post-Processing (PP) are applied in the inference stage to remove small objects and holes and to restore complete images. The experimental results on the Barley Remote Sensing Dataset show that CCTNet outperformed single CNN or Transformer methods, achieving a mean Intersection over Union (mIoU) score of 72.97%. Consequently, the proposed CCTNet can be a competitive method for crop segmentation of remote sensing images. Full article
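The overlapping sliding-window idea mentioned above, averaging per-pixel predictions wherever patches overlap to suppress splice artifacts, can be sketched independently of any particular network. Patch sizes and scores here are hypothetical:

```python
import numpy as np

def stitch_overlapping(pred_patches, coords, out_shape):
    """Average per-pixel scores from overlapping patches, the idea behind
    overlapping sliding-window inference on ultra-high-resolution images."""
    acc = np.zeros(out_shape, dtype=float)
    weight = np.zeros(out_shape, dtype=float)
    for patch, (r, c) in zip(pred_patches, coords):
        h, w = patch.shape
        acc[r:r + h, c:c + w] += patch        # accumulate scores
        weight[r:r + h, c:c + w] += 1.0       # count covering patches
    return acc / np.maximum(weight, 1.0)      # mean where covered

# Hypothetical 4x6 image covered by two 4x4 patches overlapping by 2 columns
p1 = np.ones((4, 4))
p2 = np.full((4, 4), 3.0)
out = stitch_overlapping([p1, p2], [(0, 0), (0, 2)], (4, 6))
```

In the overlap zone the output is the mean of the two patch scores, which smooths the seams that naive splicing would leave.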

15 pages, 853 KiB  
Article
Spatial-Temporal Neural Network for Rice Field Classification from SAR Images
by Yang-Lang Chang, Tan-Hsu Tan, Tsung-Hau Chen, Joon Huang Chuah, Lena Chang, Meng-Che Wu, Narendra Babu Tatini, Shang-Chih Ma and Mohammad Alkhaleefah
Remote Sens. 2022, 14(8), 1929; https://doi.org/10.3390/rs14081929 - 16 Apr 2022
Cited by 9 | Viewed by 2998
Abstract
Agriculture is an important regional economic industry in Asia, and ensuring food security and stabilizing the food supply are priorities. In response to the frequent occurrence of natural disasters caused by global warming in recent years, the Agriculture and Food Agency (AFA) in Taiwan has conducted agricultural and food surveys to address those issues. To improve the accuracy of these surveys, the AFA uses remote sensing technology to survey the planting area of agricultural crops. Unlike optical images, which are easily disturbed by rainfall and cloud cover, synthetic aperture radar (SAR) images are not affected by climatic factors, which makes them more suitable for crop production forecasting. This research proposes a novel spatial-temporal neural network called the convolutional long short-term memory rice field classifier (ConvLSTM-RFC) for rice field classification from Sentinel-1A SAR images of Yunlin and Chiayi counties in Taiwan. The proposed ConvLSTM-RFC is implemented with multiple convolutional long short-term memory attention blocks (ConvLSTM Att Block) and a bi-tempered logistic loss function (BiTLL). Moreover, a convolutional block attention module (CBAM) was added to the residual structure of the ConvLSTM Att Block to focus on rice detection in different periods on SAR images. The proposed ConvLSTM-RFC achieved the highest accuracy of 98.08%, with a rice false-positive rate as low as 15.08%. The results also indicate that the proposed ConvLSTM-RFC produces the highest area under the curve (AUC) value of 88% compared with other related models. Full article

18 pages, 6265 KiB  
Article
Scattering Intensity Analysis and Classification of Two Types of Rice Based on Multi-Temporal and Multi-Mode Simulated Compact Polarimetric SAR Data
by Xianyu Guo, Junjun Yin, Kun Li, Jian Yang and Yun Shao
Remote Sens. 2022, 14(7), 1644; https://doi.org/10.3390/rs14071644 - 29 Mar 2022
Cited by 3 | Viewed by 1694
Abstract
Because the transmitted polarization can be an arbitrary elliptical wave, there are, in theory, numerous possible hybrid dual-pol modes; it is therefore necessary to explore the feature recognition and classification ability of compact polarimetric (CP) parameters for different ground objects under different transmitting and receiving modes. In this paper, we first simulated, extracted, and analyzed the scattering intensity of two types of rice from six temporal CP synthetic aperture radar (SAR) datasets under three transmitting modes. Then, for different phenology stages, the optimal parameters for distinguishing transplanting hybrid rice (T–H) and direct-sown japonica rice (D–J) were acquired. Finally, a decision tree classification model was established based on the optimal parameters to carry out the fine classification of the two types of rice and to verify the results. The results showed that this strategy can obtain a high classification accuracy for the two types of rice, with an overall classification accuracy of more than 95% and a kappa coefficient of more than 0.94. Importantly, we also found that the CP parameters from the 1103 acquisition (harvest stage) were the best for distinguishing the two types of rice, followed by the 0730 (seedling–elongation stage), 0612 (seedling stage), and 0916 (heading–flowering stage) acquisitions. Full article

24 pages, 7234 KiB  
Article
Identification of Crop Type Based on C-AENN Using Time Series Sentinel-1A SAR Data
by Zhengwei Guo, Wenwen Qi, Yabo Huang, Jianhui Zhao, Huijin Yang, Voon-Chet Koo and Ning Li
Remote Sens. 2022, 14(6), 1379; https://doi.org/10.3390/rs14061379 - 12 Mar 2022
Cited by 15 | Viewed by 3292
Abstract
Crop type identification is the initial stage and an important part of the agricultural monitoring system. It is well known that synthetic aperture radar (SAR) Sentinel-1A imagery provides a reliable data source for crop type identification. However, a single-temporal SAR image does not contain enough features, and the unique physical characteristics of radar images are relatively lacking, which limits its potential in crop mapping. In addition, current methods may not be applicable for time-series SAR data. To address the above issues, a new crop type identification method was proposed. Specifically, a farmland mask was first generated by the object Markov random field (OMRF) model to remove the interference of non-farmland factors. Then, the features of the standard backscatter coefficient, Sigma-naught (σ0), and the backscatter coefficient normalized by the incident angle, Gamma-naught (γ0), were extracted for each type of crop, and the optimal feature combination was found from time-series SAR images by means of Jeffries-Matusita (J-M) distance analysis. Finally, to make efficient use of the optimal multi-temporal feature combination, a new network, the convolutional-autoencoder neural network (C-AENN), was developed for the crop type identification task. To prove the effectiveness of the method, several classical machine learning methods, such as the support vector machine (SVM) and random forest (RF), and deep learning methods, such as the one-dimensional convolutional neural network (1D-CNN) and stacked auto-encoder (SAE), were used for comparison. In terms of quantitative assessment, the proposed method achieved the highest accuracy, with a macro-F1 score of 0.9825, an overall accuracy (OA) score of 0.9794, and a Kappa coefficient (Kappa) score of 0.9705.
In terms of qualitative assessment, four typical regions were chosen for intuitive comparison with the sample maps, and the identification result covering the study area was compared with a contemporaneous optical image, which indicated the high accuracy of the proposed method. In short, this study enables the effective identification of crop types, which demonstrates the importance of multi-temporal radar images in feature combination and the necessity of deep learning networks to extract complex features. Full article
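The Jeffries-Matusita distance used above for optimal feature selection has a closed form when each class is modeled as a Gaussian. A univariate sketch (the class means and variances below are hypothetical, e.g. backscatter in dB):

```python
import math

def jm_distance(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two classes modeled as
    univariate Gaussians; ranges from 0 to 2 (2 = fully separable)."""
    avg_var = 0.5 * (var1 + var2)
    # Bhattacharyya distance: mean-separation term + variance-mismatch term
    b = (mu1 - mu2) ** 2 / (8.0 * avg_var) \
        + 0.5 * math.log(avg_var / math.sqrt(var1 * var2))
    return 2.0 * (1.0 - math.exp(-b))

# Identical distributions -> JM = 0; well-separated classes -> JM near 2
same = jm_distance(-12.0, 1.0, -12.0, 1.0)
far = jm_distance(-15.0, 1.0, -6.0, 1.0)
```

Feature combinations are then ranked by their pairwise J-M distances, keeping those closest to 2.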

18 pages, 8641 KiB  
Article
Mapping the Northern Limit of Double Cropping Using a Phenology-Based Algorithm and Google Earth Engine
by Yan Guo, Haoming Xia, Li Pan, Xiaoyang Zhao and Rumeng Li
Remote Sens. 2022, 14(4), 1004; https://doi.org/10.3390/rs14041004 - 18 Feb 2022
Cited by 28 | Viewed by 3307
Abstract
Double cropping is an important cropping system in China, with more than half of China’s cropland adopting the practice. Against the background of global climate change, agricultural policies, and changing farming practices, the double-cropping area has changed substantially. However, the spatial-temporal dynamics of double cropping are poorly understood. A better understanding of these dynamics, particularly of the northern limit of double cropping (NLDC), is necessary to ensure food security in China and the world and to achieve Zero Hunger, the second Sustainable Development Goal (SDG). Here, we developed a phenology-based algorithm to identify double-cropping fields by analyzing time-series Moderate Resolution Imaging Spectroradiometer (MODIS) images during the period 2000–2020 using the Google Earth Engine (GEE) platform. We then extracted the NLDC using the kernel density of double-cropping pixels and analyzed the spatial-temporal dynamics of the NLDC using the Fishnet method. We found that our algorithm accurately extracted double-cropping fields, with overall, user, and producer accuracies of 95.97%, 96.58%, and 92.21%, respectively, and a Kappa coefficient of 0.91. Over the past 20 years, the NLDC generally trended southward (the largest movement was 66.60 km) and eastward (the largest movement was 109.52 km). Our findings provide a scientific basis for further development and planning of agricultural production in China. Full article
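At its core, a phenology-based double-cropping test amounts to counting growing-season peaks in a vegetation-index time series: two peaks in a year suggest double cropping, one peak single cropping. A minimal sketch with hypothetical NDVI composites (the study's actual MODIS/GEE algorithm is more involved):

```python
import numpy as np

def count_growing_seasons(ndvi, min_peak=0.5):
    """Count NDVI peaks above min_peak: local maxima that exceed both
    neighbours and the threshold."""
    ndvi = np.asarray(ndvi, dtype=float)
    interior = ndvi[1:-1]
    peaks = (interior > ndvi[:-2]) & (interior > ndvi[2:]) & (interior >= min_peak)
    return int(np.count_nonzero(peaks))

# Hypothetical 16-day composites: winter wheat + summer maize vs. single maize
double = [0.2, 0.4, 0.7, 0.8, 0.5, 0.3, 0.4, 0.7, 0.9, 0.6, 0.3]
single = [0.2, 0.2, 0.3, 0.5, 0.8, 0.9, 0.7, 0.4, 0.2, 0.2, 0.2]
seasons_double = count_growing_seasons(double)
seasons_single = count_growing_seasons(single)
```

A real implementation would first smooth the series (e.g. with a Savitzky-Golay filter) and constrain peak timing to plausible crop calendars.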

20 pages, 19015 KiB  
Article
A Comparison between Support Vector Machine and Water Cloud Model for Estimating Crop Leaf Area Index
by Mehdi Hosseini, Heather McNairn, Scott Mitchell, Laura Dingle Robertson, Andrew Davidson, Nima Ahmadian, Avik Bhattacharya, Erik Borg, Christopher Conrad, Katarzyna Dabrowska-Zielinska, Diego de Abelleyra, Radoslaw Gurdak, Vineet Kumar, Nataliia Kussul, Dipankar Mandal, Y. S. Rao, Nicanor Saliendra, Andrii Shelestov, Daniel Spengler, Santiago R. Verón, Saeid Homayouni and Inbal Becker-Reshef
Remote Sens. 2021, 13(7), 1348; https://doi.org/10.3390/rs13071348 - 1 Apr 2021
Cited by 23 | Viewed by 5929
Abstract
The water cloud model (WCM) can be inverted to estimate leaf area index (LAI) using the intensity of backscatter from synthetic aperture radar (SAR) sensors. Published studies have demonstrated that the WCM can accurately estimate LAI if the model is effectively calibrated. However, calibration of this model requires access to field measures of LAI as well as soil moisture. In contrast, machine learning (ML) algorithms can be trained to estimate LAI from satellite data, even if field moisture measures are not available. In this study, a support vector machine (SVM) was trained to estimate the LAI for corn, soybeans, rice, and wheat crops. These results were compared to LAI estimates from the WCM. To complete this comparison, in situ and satellite data were collected from seven Joint Experiment for Crop Assessment and Monitoring (JECAM) sites located in Argentina, Canada, Germany, India, Poland, Ukraine and the United States of America (U.S.A.). The models used C-Band backscatter intensity for two polarizations (like-polarization (VV) and cross-polarization (VH)) acquired by the RADARSAT-2 and Sentinel-1 SAR satellites. Both the WCM and SVM models performed well in estimating the LAI of corn. For the SVM, the correlation (R) between estimated LAI for corn and LAI measured in situ was reported as 0.93, with a root mean square error (RMSE) of 0.64 m²/m² and mean absolute error (MAE) of 0.51 m²/m². The WCM produced an R-value of 0.89, with only slightly higher errors (RMSE of 0.75 m²/m² and MAE of 0.61 m²/m²) when estimating corn LAI. For rice, only the SVM model was tested, given the lack of soil moisture measures for this crop. In this case, both high correlations and low errors were observed in estimating the LAI of rice using SVM (R of 0.96, RMSE of 0.41 m²/m² and MAE of 0.30 m²/m²). However, the results demonstrated that when the calibration points were limited (in this case for soybeans), the WCM outperformed the SVM model.
This study demonstrates the importance of testing different modeling approaches over diverse agro-ecosystems to increase confidence in model performance. Full article
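The water cloud model discussed above has a standard forward form: canopy volume scattering plus a soil contribution attenuated by the two-way canopy transmissivity. A sketch in linear power units, with hypothetical calibration coefficients A and B (in practice fitted per crop from field data):

```python
import math

def wcm_backscatter(lai, sigma_soil, A, B, theta_deg):
    """Forward water cloud model: total backscatter as vegetation
    increasingly attenuates the soil term. A, B are crop-specific
    coefficients obtained by calibration; theta is the incidence angle."""
    cos_t = math.cos(math.radians(theta_deg))
    tau2 = math.exp(-2.0 * B * lai / cos_t)       # two-way canopy attenuation
    sigma_veg = A * lai * cos_t * (1.0 - tau2)    # canopy volume scattering
    return sigma_veg + tau2 * sigma_soil

# Hypothetical values: bare soil (LAI = 0) returns the soil term unchanged,
# while a dense canopy (LAI = 5) is dominated by vegetation scattering
bare = wcm_backscatter(0.0, sigma_soil=0.05, A=0.1, B=0.4, theta_deg=35)
dense = wcm_backscatter(5.0, sigma_soil=0.05, A=0.1, B=0.4, theta_deg=35)
```

Inverting this relation numerically for LAI, given observed backscatter and soil moisture, is the estimation step the study compares against the SVM.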

19 pages, 4814 KiB  
Article
High-Dimensional Satellite Image Compositing and Statistics for Enhanced Irrigated Crop Mapping
by Michael J. Wellington and Luigi J. Renzullo
Remote Sens. 2021, 13(7), 1300; https://doi.org/10.3390/rs13071300 - 29 Mar 2021
Cited by 8 | Viewed by 3277
Abstract
Accurate irrigated area maps remain difficult to generate, as smallholder irrigation schemes often escape detection. Efforts to map smallholder irrigation have often relied on complex classification models fitted to temporal image stacks. The use of high-dimensional geometric median composites (geomedians) and high-dimensional statistics of time-series may simplify classification models and enhance accuracy. High-dimensional statistics for temporal variation, such as the spectral median absolute deviation, indicate spectral variability within a period contributing to a geomedian. The Ord River Irrigation Area was used to validate Digital Earth Australia’s annual geomedian and temporal variation products. Geomedian composites and the spectral median absolute deviation were then calculated on Sentinel-2 images for three smallholder irrigation schemes in Matabeleland, Zimbabwe, none of which were classified as areas equipped for irrigation in AQUASTAT’s Global Map of Irrigated Areas. Supervised random forest classification was applied to all sites. For the three Matabeleland sites, the average Kappa coefficient was 0.87 and overall accuracy was 95.9% on validation data. This compared with 0.12 and 77.2%, respectively, for the Food and Agriculture Organisation’s Water Productivity through Open access of Remotely sensed derived data (WaPOR) land use classification map. The spectral median absolute deviation was ranked among the most important variables across all models based on mean decrease in accuracy. Change detection capacity also means the spectral median absolute deviation has some advantages for cropland mapping over indices such as the Normalized Difference Vegetation Index. The method demonstrated shows potential to be deployed across countries and regions where smallholder irrigation schemes account for large proportions of irrigated area. Full article
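The two statistics described above can be sketched directly: Weiszfeld's algorithm yields the geometric median (geomedian) of a pixel's multispectral time series, and the spectral median absolute deviation is the median distance of observations from it. The reflectance values below are hypothetical:

```python
import numpy as np

def geomedian(obs, iters=100, eps=1e-9):
    """Geometric median of multispectral observations (rows = dates,
    cols = bands) via Weiszfeld's iteration."""
    x = obs.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(obs - x, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        x = (obs * w[:, None]).sum(axis=0) / w.sum()
    return x

def spectral_mad(obs, gm):
    """Spectral median absolute deviation: median Euclidean distance of
    each observation from the geomedian, a per-pixel variability measure."""
    return float(np.median(np.linalg.norm(obs - gm, axis=1)))

# Hypothetical 3-band reflectance time series for one pixel;
# the last date is an outlier (e.g. cloud or a land-cover change)
obs = np.array([[0.1, 0.2, 0.3],
                [0.1, 0.2, 0.3],
                [0.1, 0.2, 0.3],
                [0.5, 0.6, 0.7]])
gm = geomedian(obs)
mad = spectral_mad(obs, gm)
```

Note how the geomedian resists the outlier date, while a pixel whose spectra genuinely vary within the year (e.g. an irrigated crop cycle) would show a large spectral MAD.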

23 pages, 4682 KiB  
Article
Crop Biomass Mapping Based on Ecosystem Modeling at Regional Scale Using High Resolution Sentinel-2 Data
by Liming He, Rong Wang, Georgy Mostovoy, Jane Liu, Jing M. Chen, Jiali Shang, Jiangui Liu, Heather McNairn and Jarrett Powers
Remote Sens. 2021, 13(4), 806; https://doi.org/10.3390/rs13040806 - 22 Feb 2021
Cited by 10 | Viewed by 4046
Abstract
We evaluate the potential of using a process-based ecosystem model (BEPS) for crop biomass mapping at 20 m resolution over the research site in Manitoba, western Canada, driven by spatially explicit leaf area index (LAI) retrieved from Sentinel-2 spectral reflectance throughout the entire growing season. We find that overall, the BEPS-simulated crop gross primary production (GPP), net primary production (NPP), and LAI time-series can explain 82%, 83%, and 85%, respectively, of the variation in the above-ground biomass (AGB) for six selected annual crops, while an application of individual crop LAI explains only 50% of the variation in AGB. The linear relationships between the AGB and these three indicators (GPP, NPP and LAI time-series) are rather high for the six crops, while the slopes of the regression models vary for individual crop type, indicating the need for calibration of key photosynthetic parameters and carbon allocation coefficients. This study demonstrates that accumulated GPP and NPP derived from an ecosystem model, driven by Sentinel-2 LAI data and abiotic data, can be effectively used for crop AGB mapping; the temporal information from LAI is also effective in AGB mapping for some crop types. Full article

16 pages, 7553 KiB  
Article
Evaluating the Impact of the 2020 Iowa Derecho on Corn and Soybean Fields Using Synthetic Aperture Radar
by Mehdi Hosseini, Hannah R. Kerner, Ritvik Sahajpal, Estefania Puricelli, Yu-Hsiang Lu, Afolarin Fahd Lawal, Michael L. Humber, Mary Mitkish, Seth Meyer and Inbal Becker-Reshef
Remote Sens. 2020, 12(23), 3878; https://doi.org/10.3390/rs12233878 - 26 Nov 2020
Cited by 17 | Viewed by 4410
Abstract
On 10 August 2020, a series of intense and fast-moving windstorms known as a derecho caused widespread damage across the agricultural regions of Iowa, the top US corn-producing state. This severe weather event bent and flattened crops over approximately one-third of the state. Immediate evaluation of the disaster’s impact on agricultural lands, including maps of crop damage, was critical to enabling a rapid response by government agencies, insurance companies, and the agricultural supply chain. Given the very large area impacted by the disaster, satellite imagery stands out as the most efficient means of estimating the disaster impact. In this study, we used time series of Sentinel-1 data to detect the impacted fields. We developed an in-season crop type map using Harmonized Landsat and Sentinel-2 data to assess the impact on important commodity crops. We intersected a SAR-based damage map with the in-season crop type map to create damaged-area maps for corn and soybean fields. In total, we identified 2.59 million acres as damaged by the derecho, consisting of 1.99 million acres of corn and 0.6 million acres of soybean fields. We also categorized the impacted fields into three classes: mild, medium, and high impact. In total, 1.087 million acres of corn and 0.206 million acres of soybean were categorized as high-impact fields. Full article

22 pages, 3293 KiB  
Article
High-Resolution Soybean Yield Mapping Across the US Midwest Using Subfield Harvester Data
by Walter T. Dado, Jillian M. Deines, Rinkal Patel, Sang-Zi Liang and David B. Lobell
Remote Sens. 2020, 12(21), 3471; https://doi.org/10.3390/rs12213471 - 22 Oct 2020
Cited by 16 | Viewed by 4940
Abstract
Cloud computing and freely available, high-resolution satellite data have enabled recent progress in crop yield mapping at fine scales. However, extensive validation data at a matching resolution remain uncommon or infeasible to obtain. This has limited the ability to evaluate different yield estimation models and to improve understanding of the features most useful for yield estimation in both data-rich and data-poor contexts. Here, we assess machine learning models’ capacity for soybean yield prediction using a unique ground-truth dataset of high-resolution (5 m) yield maps generated from combine harvester yield monitor data for over a million field-year observations across the Midwestern United States from 2008 to 2018. First, we compare random forest (RF) implementations, testing a range of feature engineering approaches using Sentinel-2 and Landsat spectral data for 20 and 30 m scale yield prediction. We find that Sentinel-2-based models can explain up to 45% of out-of-sample yield variability from 2017 to 2018 (r2 = 0.45), while Landsat-based models explain up to 43% across the longer 2008–2018 period. Using discrete Fourier transforms, or harmonic regressions, to capture soybean phenology improved the Landsat-based model considerably. Second, we compare RF models trained on this ground-truth data with models trained on available county-level statistics. We find that county-level models rely more heavily on just a few predictors, namely August weather covariates (vapor pressure deficit, rainfall, temperature) and July and August near-infrared observations. As a result, county-scale models perform relatively poorly in field-scale validation (r2 = 0.32), especially for high-yielding fields, but perform similarly to field-scale models when evaluated at the county scale (r2 = 0.82). Finally, we test whether our findings on variable importance can inform a simple, generalizable framework for regions or time periods beyond ground data availability.
To do so, we test improvements to the Scalable Crop Yield Mapper (SCYM), an approach that uses crop simulations to train statistical models for yield estimation. Based on the findings from our RF models, we use harmonic regressions to estimate peak vegetation index (VI) and a VI observation 30 days later, with August rainfall as the sole weather covariate in our new SCYM model. These modifications improved SCYM’s explained variance (r2 = 0.27 at the 30 m scale) and provide a new, parsimonious model.
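The harmonic regression idea in this abstract, fitting sinusoids to a sparse VI time series and reading off peak VI and the value 30 days later, can be sketched with an ordinary least-squares fit. The synthetic NDVI curve, day-of-year samples, and two-harmonic choice below are illustrative assumptions, not the study’s configuration:

```python
import numpy as np

def fit_harmonic(doy, vi, n_harmonics=2, period=365.0):
    """Least-squares fit of vi(t) = a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)]."""
    t = np.asarray(doy, dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols.append(np.cos(w))
        cols.append(np.sin(w))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, np.asarray(vi, dtype=float), rcond=None)
    return coef

def predict_harmonic(coef, doy, period=365.0):
    """Evaluate the fitted harmonic series at arbitrary days of year."""
    t = np.asarray(doy, dtype=float)
    n_harmonics = (len(coef) - 1) // 2
    out = np.full_like(t, coef[0])
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        out += coef[2 * k - 1] * np.cos(w) + coef[2 * k] * np.sin(w)
    return out

# Sparse, cloud-gapped observations of a synthetic soybean NDVI curve:
doy = np.array([120, 135, 152, 170, 181, 200, 213, 228, 245, 260])
vi = 0.3 + 0.45 * np.exp(-((doy - 215) / 40.0) ** 2)

coef = fit_harmonic(doy, vi)
grid = np.arange(120, 280)
fitted = predict_harmonic(coef, grid)
peak_doy = grid[np.argmax(fitted)]          # day of peak VI
peak_vi = fitted.max()                      # peak VI value
vi_30 = predict_harmonic(coef, np.array([peak_doy + 30]))[0]  # VI 30 days later
```

The fitted series gives gap-free VI values on any day, which is what makes phenology metrics like peak VI recoverable from irregular optical observations.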

Other

15 pages, 4320 KiB  
Technical Note
Creating a Field-Wide Forage Canopy Model Using UAVs and Photogrammetry Processing
by Cameron Minch, Joseph Dvorak, Josh Jackson and Stuart Tucker Sheffield
Remote Sens. 2021, 13(13), 2487; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132487 - 25 Jun 2021
Cited by 4 | Viewed by 1892
Abstract
Alfalfa canopy structure reveals useful information for managing this forage crop, but manual measurements are impractical at field scale. Photogrammetry processing of images from unmanned aerial vehicles (UAVs) can create a field-wide three-dimensional model of the crop canopy. The goal of this study was to determine the UAV flight parameters that enable reliable generation of canopy models at all stages of alfalfa growth. Flights were conducted over two separate fields on four different dates using three different sets of flight parameters, for a total of 24 flights. The flight parameters considered were the following: 30 m altitude with a 90° camera gimbal angle, 50 m altitude with a 90° gimbal angle, and 50 m altitude with a 75° gimbal angle. A total of 32 three-dimensional canopy models were created using photogrammetry: images from each of the 24 flights were used to create 24 separate models, and images from multiple flights were combined to create an additional eight models. The models were analyzed based on Model Ground Sampling Distance (GSD), Model Root Mean Square Error (RMSE), and camera calibration difference. Of the 32 attempted models, 30 (94%) were judged acceptable. The models were then used to estimate alfalfa yield; the best yield estimates occurred with flights at 50 m altitude with a 75° camera gimbal angle, so these flight parameters are suggested for the most consistent results.
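The trade-off among the three flight configurations can be illustrated by computing the ground sampling distance (GSD) each one yields. The sensor parameters below are assumptions for a typical 1-inch UAV camera (the abstract does not specify the sensor), and the oblique-gimbal correction is a simple slant-range scaling at the image centre:

```python
import math

def gsd_cm(altitude_m, gimbal_deg, sensor_width_mm=13.2,
           focal_length_mm=8.8, image_width_px=5472):
    """Ground sampling distance (cm/pixel) at the image centre.

    Sensor defaults are an assumed 1-inch-class UAV camera.
    A 90 deg gimbal angle is nadir; tilting to 75 deg lengthens the
    slant range to the footprint centre by a factor of 1/cos(tilt).
    """
    tilt = math.radians(90.0 - gimbal_deg)   # 0 at nadir
    slant_m = altitude_m / math.cos(tilt)
    return (sensor_width_mm * slant_m * 100.0) / (focal_length_mm * image_width_px)

# The study's three configurations, coarsest detail last:
for alt, gimbal in [(30, 90), (50, 90), (50, 75)]:
    print(f"{alt} m @ {gimbal} deg: {gsd_cm(alt, gimbal):.2f} cm/px")
```

With these assumed optics, flying at 50 m roughly 1.7x coarsens the GSD relative to 30 m, and the 75° gimbal coarsens it slightly further, which is why GSD is one of the quality metrics the study tracks per model.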
