Remote Sensing and Associated Artificial Intelligence in Agricultural Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 20277

Special Issue Editors


Guest Editor
Department of Agricultural, Forest and Food Sciences, University of Turin, L.go Braccini 2, 10095 Grugliasco, Italy
Interests: remote sensing; spatial analysis and landscape planning; GIS; digital photogrammetry; precision farming; lidar

Guest Editor
Department of Agricultural, Forest and Food Sciences, University of Turin, L.go Braccini 2, 10095 Grugliasco, Italy
Interests: remote sensing; GIS; agriculture; CAP; Sentinel-2; crop classification

Guest Editor
Department of Agricultural, Forest and Food Sciences, University of Turin, L.go Braccini 2, 10095 Grugliasco, Italy
Interests: remote sensing; forestry; SAR; photogrammetry; interferometry

Guest Editor
Department of Agricultural, Forest and Food Sciences (DISAFA), GEO4Agri DISAFA Lab, Università degli Studi di Torino, Largo Paolo Braccini 2, 10095 Grugliasco (TO), Italy; Earth Observation Valle d'Aosta (eoVdA), Località L'Île-Blonde 5, 11020 Brissogne (AO), Italy
Interests: remote sensing; GIS; forestry and environmental sciences; Google Earth Engine; photogrammetry; drones; ecology; agriculture; LST; climate change; geostatistics; R; SAGA GIS; land cover; change detection

Special Issue Information

Dear Colleagues,

This Special Issue on “Remote Sensing and Associated Artificial Intelligence in Agricultural Applications” aims to assemble high-quality contributions providing a comprehensive overview of current geomatics and remote sensing techniques that use artificial intelligence to support agricultural applications.

The following topics are strongly encouraged:

  • Research experiences relating to the potentialities and limits of AI in supporting remote-sensing-based applications in agricultural and forest contexts. A special focus on the comparison between AI-based and traditional approaches is highly desirable, with the aim of pointing out whether, when, and where AI can be successfully and reliably used in place of more conventional, explicit approaches.
  • AI for data integration aimed at maximizing the exploitation of the spatial, temporal, and spectral features of sensors from different platforms, with special concern for scalable approaches relying on the adoption of RPAS, aerial, and satellite datasets.
  • AI for supporting remote-sensing-based services in agriculture and its relationship with data integration and analysis systems (DIASs), high-performance computing (HPC) and Internet of Things (IoT).
  • AI for hyper/multi-spectral image interpretation/classification.
  • AI for point cloud interpretation from digital photogrammetry and LiDAR systems.
  • AI for time-trend analysis and interpretation (e.g., crop phenology detection and forecasting, drought trend modelling).
  • AI to support decision-support systems for crop management (irrigation, fertilization, crop protection, etc.) based on the integration of satellite, meteorological, and field data.
  • Economic analyses of future trends in the technology transfer of AI to the agricultural sector, profit scenarios, and reports on farmers' attitudes towards introducing AI into their ordinary workflows.

All other proposals related to the adoption of AI applied to remote sensing in agriculture will also be evaluated.

Prof. Dr. Enrico Corrado Borgogno
Dr. Filippo Sarvia
Dr. Samuele De Petris
Dr. Tommaso Orusa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • agricultural application
  • common agricultural policy (CAP)
  • precision agriculture
  • machine learning
  • service prototype development
  • crop monitoring
  • crop detection
  • deep learning
  • artificial intelligence
  • crop classification
  • yield prediction
  • GIS application
  • precision farming
  • remotely piloted aircraft systems (RPASs)
  • image segmentation

Published Papers (11 papers)


Research

22 pages, 6610 KiB  
Article
Estimating Sugarcane Aboveground Biomass and Carbon Stock Using the Combined Time Series of Sentinel Data with Machine Learning Algorithms
by Savittri Ratanopad Suwanlee, Dusadee Pinasu, Jaturong Som-ard, Enrico Borgogno-Mondino and Filippo Sarvia
Remote Sens. 2024, 16(5), 750; https://doi.org/10.3390/rs16050750 - 22 Feb 2024
Viewed by 1530
Abstract
Accurately mapping crop aboveground biomass (AGB) in a timely manner is crucial for promoting sustainable agricultural practices and effective climate change mitigation actions. To address this challenge, the integration of satellite-based Earth Observation (EO) data with advanced machine learning algorithms offers promising prospects to monitor land and crop phenology over time. However, achieving accurate AGB maps in small crop fields and complex landscapes is still an ongoing challenge. In this study, AGB was estimated for small sugarcane fields (<1 ha) located in the Kumphawapi district of Udon Thani province, Thailand. Specifically, to explore, estimate, and map sugarcane AGB and carbon stock for 2018 and 2021, ground measurements and time series of Sentinel-1 (S1) and Sentinel-2 (S2) data were used, and random forest regression (RFR) and support vector regression (SVR) were applied. The optimized predictive models were then used to generate large-scale maps. The RFR models demonstrated high efficiency and consistency compared to the SVR models for the two years considered. Specifically, the resulting AGB maps displayed noteworthy accuracy, with coefficients of determination (R2) of 0.85 and 0.86 and root mean square errors (RMSE) of 8.84 and 9.61 t/ha for 2018 and 2021, respectively. In addition, mapping sugarcane AGB and carbon stock across a large scale showed high spatial variability within fields for both base years. These results exhibited a high potential for effectively depicting the spatial distribution of AGB densities. Finally, it was shown how these highly accurate maps can support, as valuable tools, sustainable agricultural practices, government policy, and decision-making processes.
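As a rough illustration of the regression setup described in this abstract (not the authors' actual pipeline; the features and values below are invented stand-ins for per-field S1 backscatter and S2-derived NDVI), a random forest regressor can be fit against ground-measured AGB with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic per-field predictors: S1 backscatter (VV, VH, in dB) and a
# peak-NDVI summary from an S2 time series; target AGB in t/ha.
n_fields = 200
X = np.column_stack([
    rng.uniform(-15, -5, n_fields),   # VV backscatter (dB)
    rng.uniform(-25, -10, n_fields),  # VH backscatter (dB)
    rng.uniform(0.2, 0.9, n_fields),  # peak NDVI
])
y = 40 * X[:, 2] + 0.5 * (X[:, 0] + 15) + rng.normal(0, 2, n_fields)

# Simple train/test split, then fit and evaluate the RFR model.
train, test = np.arange(150), np.arange(150, 200)
rfr = RandomForestRegressor(n_estimators=300, random_state=0)
rfr.fit(X[train], y[train])

pred = rfr.predict(X[test])
r2 = r2_score(y[test], pred)
rmse = mean_squared_error(y[test], pred) ** 0.5
```

In the study itself, the fitted model would then be applied pixel- or field-wise across the satellite mosaic to produce the large-scale AGB map.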

19 pages, 9624 KiB  
Article
Crop Type Identification Using High-Resolution Remote Sensing Images Based on an Improved DeepLabV3+ Network
by Zhu Chang, Hu Li, Donghua Chen, Yufeng Liu, Chen Zou, Jian Chen, Weijie Han, Saisai Liu and Naiming Zhang
Remote Sens. 2023, 15(21), 5088; https://doi.org/10.3390/rs15215088 - 24 Oct 2023
Cited by 2 | Viewed by 2385
Abstract
Remote sensing technology has become a popular tool for crop classification, but it faces challenges in accurately identifying crops in areas with fragmented land plots and complex planting structures. To address this issue, we propose an improved method for crop identification in high-resolution remote sensing images, achieved by modifying the DeepLab V3+ semantic segmentation network. In this paper, a typical crop area in the Jianghuai watershed is taken as the experimental area, and Gaofen-2 satellite images with high spatial resolution are used as the data source. Based on the original DeepLab V3+ model, the CI and OSAVI vegetation indices are added to the input layers, and MobileNet V2 is used as the backbone network. Meanwhile, an upsampling layer is added to the network, and attention mechanisms are added to the ASPP and upsampling layers. Accuracy verification of the identification results shows that the MIoU and PA of this model on the test set reach 85.63% and 95.30%, the IoU and F1_Score of wheat are 93.76% and 96.78%, and the IoU and F1_Score of rape are 74.24% and 85.51%, respectively. The identification accuracy of this model is significantly better than that of the original DeepLab V3+ model and other related models. The proposed method can accurately extract the distribution of wheat and rape from high-resolution remote sensing images, providing a new technical approach for applying such imagery to the identification of wheat and rape.
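For reference, vegetation indices like those added to the input layers above are computed per pixel from band reflectances. The sketch below uses the commonly cited OSAVI and green chlorophyll index formulas (the paper's exact CI variant may differ); the toy reflectance arrays are invented:

```python
import numpy as np

def osavi(nir, red, soil_factor=0.16):
    """Optimized Soil-Adjusted Vegetation Index (per-pixel)."""
    return (1 + soil_factor) * (nir - red) / (nir + red + soil_factor)

def ci_green(nir, green):
    """Green chlorophyll index: NIR / green reflectance - 1."""
    return nir / green - 1.0

# Toy 2x2 surface-reflectance "images" for three bands.
nir = np.array([[0.40, 0.45], [0.50, 0.35]])
red = np.array([[0.10, 0.12], [0.08, 0.15]])
green = np.array([[0.12, 0.14], [0.10, 0.16]])

# Index layers with the same shape as the input bands; these can be
# stacked onto the RGB/NIR channels as extra network input layers.
osavi_img = osavi(nir, red)
ci_img = ci_green(nir, green)
```

Stacking such index layers alongside the raw bands gives the segmentation network explicit vegetation signal without changing its architecture.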

19 pages, 5377 KiB  
Article
Research on the Shape Classification Method of Rural Homesteads Based on Parcel Scale—Taking Yangdun Village as an Example
by Jie Zhang, Beilei Fan, Hao Li, Yunfei Liu, Ren Wei and Shengping Liu
Remote Sens. 2023, 15(19), 4763; https://doi.org/10.3390/rs15194763 - 28 Sep 2023
Viewed by 748
Abstract
Basic information surveys on homesteads require an understanding of homestead shape: the shape of a homestead, based on its spatial location, reflects information such as its outline and regularity, but the shape classification of rural homesteads at the parcel scale currently lacks analytical methods. In this study, we endeavor to explore a classification model suitable for characterizing homestead shapes at the parcel scale by assessing the impact of various research methods. Additionally, we aim to uncover the evolutionary patterns in homestead shapes. The study focuses on Yangdun Village, located in Deqing County, Zhejiang Province, as the research area. The data utilized comprise Google Earth satellite imagery and a vector layer representing homesteads at the parcel scale. To classify the shapes of homesteads and compare classification accuracy, we employ a combination of methods, including the fast Fourier transform (FFT), Hu invariant moments (HIM), the Boyce and Clark shape index (BCSI), and the AlexNet model. Our findings reveal the following: (1) The random forest method, when coupled with FFT, demonstrates the highest effectiveness in identifying the shape categories of homesteads, achieving an average accuracy rate of 88.6%. (2) Combining multiple methods does not enhance recognition accuracy; for instance, the accuracy of the FFT + HIM combination was 88.4%. (3) The Boyce and Clark shape index proves unsuitable for classifying homestead shapes, yielding an average accuracy rate of only 58%; furthermore, there is no precise numerical correlation between homestead category and shape index. (4) It is noteworthy that over half of the homesteads in Yangdun Village exhibit rectangular-like shapes; following the “homestead reform”, many square-like homesteads have been vacated, resulting in a mixed arrangement of homesteads overall.
The research findings can serve as a methodological reference for the investigation of rural homestead shapes. Proficiency in homestead shape classification holds significant importance for the information investigation, routine management, and layout optimization of rural land.

21 pages, 3484 KiB  
Article
Monitoring Agricultural Land and Land Cover Change from 2001–2021 of the Chi River Basin, Thailand Using Multi-Temporal Landsat Data Based on Google Earth Engine
by Savittri Ratanopad Suwanlee, Surasak Keawsomsee, Morakot Pengjunsang, Nudthawud Homtong, Amornchai Prakobya, Enrico Borgogno-Mondino, Filippo Sarvia and Jaturong Som-ard
Remote Sens. 2023, 15(17), 4339; https://doi.org/10.3390/rs15174339 - 03 Sep 2023
Viewed by 3217
Abstract
In recent years, climate change has greatly affected agricultural activity, sustainability and production, making it difficult to conduct crop management and food security assessment. As a consequence, significant changes in agricultural land and land cover (LC) have occurred, mostly due to the introduction of new agricultural practices, techniques and crops. Earth Observation (EO) data, cloud-computing platforms and powerful machine learning methods can certainly support analysis within the agricultural context. Therefore, accurate and updated agricultural land and LC maps can be useful to derive valuable information for land change monitoring, trend planning, decision-making and sustainable land management. In this context, this study aims at monitoring temporal and spatial changes between 2001 and 2021 (in four 5-year periods) within the Chi River Basin (NE Thailand). Specifically, all available Landsat archives and the random forest (RF) classifier were jointly used within the Google Earth Engine (GEE) platform in order to: (i) generate five different crop type maps (focusing on rice, cassava, para rubber and sugarcane classes), and (ii) monitor the agricultural land transitions over time. For each crop map, a confusion matrix and the corresponding accuracy were computed and tested against a validation dataset. In particular, an overall accuracy > 88% was found for all five resulting crop maps (for the years 2001, 2006, 2011, 2016 and 2021). Subsequently, the agricultural land transitions were analyzed, and a total of 18,957 km2 (54.5% of the area) was found to have changed within the 20 years (2001–2021). In particular, an increase in cassava and para rubber areas was found at the expense of rice fields, probably due to two key drivers acting over time: agricultural policy and staple prices.
Finally, it is worth highlighting that such results can be decisive in a challenging agricultural environment such as Thailand's. In particular, the high accuracy of the five derived crop type maps can provide spatially consistent and reliable information to support local sustainable agricultural land management and the decisions of policymakers and many stakeholders.
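The land transitions analyzed in this study reduce to a cross-tabulation of two classified maps of the same area. A minimal numpy sketch (the class codes and toy maps are invented, not from the paper) could look like:

```python
import numpy as np

# Toy class codes: 0 = rice, 1 = cassava, 2 = para rubber, 3 = other.
classes = ["rice", "cassava", "para rubber", "other"]

# Two classification maps of the same 3x3 area (e.g., 2001 and 2021).
map_2001 = np.array([[0, 0, 1], [0, 2, 3], [0, 0, 1]])
map_2021 = np.array([[1, 0, 1], [2, 2, 3], [1, 0, 1]])

# Transition matrix: rows = 2001 class, columns = 2021 class.
n = len(classes)
transitions = np.zeros((n, n), dtype=int)
np.add.at(transitions, (map_2001.ravel(), map_2021.ravel()), 1)

# Off-diagonal cells are pixels whose class changed between the dates.
changed = transitions.sum() - np.trace(transitions)
changed_fraction = changed / transitions.sum()
```

Multiplying `changed_fraction` by the basin area gives the kind of changed-area figure (18,957 km2, 54.5%) reported above.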

21 pages, 4714 KiB  
Article
A Lightweight Winter Wheat Planting Area Extraction Model Based on Improved DeepLabv3+ and CBAM
by Yao Zhang, Hong Wang, Jiahao Liu, Xili Zhao, Yuting Lu, Tengfei Qu, Haozhe Tian, Jingru Su, Dingsheng Luo and Yalei Yang
Remote Sens. 2023, 15(17), 4156; https://doi.org/10.3390/rs15174156 - 24 Aug 2023
Cited by 4 | Viewed by 1353
Abstract
This paper focuses on the problems of inaccurate extraction of winter wheat edges from high-resolution images, misclassification and omission due to intraclass differences, and the large number of network parameters and long training time of existing classical semantic segmentation models. It proposes a lightweight winter wheat planting area extraction model that combines the DeepLabv3+ model with a dual-attention mechanism. The model uses the lightweight network MobileNetv2 to replace Xception, the backbone network of DeepLabv3+, to reduce the number of parameters and improve training speed. It also introduces the lightweight Convolutional Block Attention Module (CBAM) dual-attention mechanism to extract winter wheat feature information more accurately and efficiently. Finally, the model is used to complete dataset creation, model training, winter wheat plantation extraction, and accuracy evaluation. The results show that the improved lightweight DeepLabv3+ model has high reliability in the recognition and extraction of winter wheat: its OA, mPA, and mIoU reach 95.28%, 94.40%, and 89.79%, respectively, which are 1.52%, 1.51%, and 2.99% higher than those of the original DeepLabv3+ model. Meanwhile, the model's recognition accuracy was much higher than that of the three classical semantic segmentation models UNet, ResUNet and PSPNet. The improved lightweight DeepLabv3+ also has far fewer parameters and a far shorter training time than the other four models. The model has been tested in other regions, and the results show that it has good generalization ability. Overall, the model maintains extraction accuracy while significantly reducing the number of parameters and training time, enabling the fast and accurate extraction of winter wheat planting sites, and has good application prospects.

22 pages, 7260 KiB  
Article
MLGNet: Multi-Task Learning Network with Attention-Guided Mechanism for Segmenting Agricultural Fields
by Weiran Luo, Chengcai Zhang, Ying Li and Yaning Yan
Remote Sens. 2023, 15(16), 3934; https://doi.org/10.3390/rs15163934 - 08 Aug 2023
Cited by 1 | Viewed by 1007
Abstract
The precise delineation of agricultural fields can drive the intelligent development of agricultural production, and high-resolution remote sensing images make it convenient to obtain precise fields. As spatial resolution advances, the complexity and heterogeneity of land features are accentuated, making it challenging for existing methods to obtain structurally complete fields, especially in regions with blurred edges. Therefore, a multi-task learning network with an attention-guided mechanism is introduced for segmenting agricultural fields. More specifically, an attention-guided fusion module is used to learn complementary information layer by layer, while the multi-task learning scheme considers both the edge detection and semantic segmentation tasks. Based on this, we further segmented the merged fields using broken edges, following the theory of connectivity perception. Finally, we chose three cities in the Netherlands as study areas for experimentation and evaluated the extracted field regions and edges separately. The results showed that (1) the proposed method achieved the highest accuracy in the three cities, with IoU of 91.27%, 93.05% and 89.76%, respectively, and (2) the Qua metrics of the processed edges demonstrated improvements of 6%, 6%, and 5%, respectively. This work successfully segmented potential fields with blurred edges, indicating its potential for precision agriculture development.

20 pages, 4671 KiB  
Article
Trap-Based Pest Counting: Multiscale and Deformable Attention CenterNet Integrating Internal LR and HR Joint Feature Learning
by Jae-Hyeon Lee and Chang-Hwan Son
Remote Sens. 2023, 15(15), 3810; https://doi.org/10.3390/rs15153810 - 31 Jul 2023
Cited by 1 | Viewed by 909
Abstract
Pest counting, which predicts the number of pests at an early stage, is very important because it enables rapid pest control, reduces damage to crops, and improves productivity. In recent years, light traps have been increasingly used to lure and photograph pests for pest counting. However, pest images exhibit wide variability in pest appearance owing to severe occlusion, wide pose variation, and even scale variation. This makes pest counting more challenging. To address these issues, this study proposes a new pest counting model, referred to as multiscale and deformable attention CenterNet (Mada-CenterNet), for internal low-resolution (LR) and high-resolution (HR) joint feature learning. Compared with the conventional CenterNet, the proposed Mada-CenterNet adopts a two-step multiscale heatmap generation approach to predict LR and HR heatmaps that adapt to scale variations, that is, changes in the number of pests. In addition, to overcome the pose and occlusion problems, a new between-hourglass skip connection based on deformable and multiscale attention is designed to ensure internal LR and HR joint feature learning and incorporate geometric deformation, thereby improving pest counting accuracy. Through experiments, the proposed Mada-CenterNet is verified to generate the HR heatmap more accurately and to improve pest counting accuracy owing to multiscale heatmap generation, joint internal feature learning, and deformable and multiscale attention. The proposed model is also confirmed to be effective in overcoming severe occlusions and variations in pose and scale. The experimental results show that the proposed model outperforms state-of-the-art crowd counting and object detection models.

24 pages, 6247 KiB  
Article
YOLOv7-MA: Improved YOLOv7-Based Wheat Head Detection and Counting
by Xiaopeng Meng, Changchun Li, Jingbo Li, Xinyan Li, Fuchen Guo and Zhen Xiao
Remote Sens. 2023, 15(15), 3770; https://doi.org/10.3390/rs15153770 - 29 Jul 2023
Cited by 4 | Viewed by 2166
Abstract
The detection and counting of wheat heads are crucial for wheat yield estimation. To address the issues of overlapping wheat heads of small volume on complex backgrounds, this paper proposes the YOLOv7-MA model. By introducing micro-scale detection layers and the convolutional block attention module, the model enhances the target information of wheat heads and weakens the background information, thereby strengthening its ability to detect small wheat heads and improving detection performance. Experimental results indicate that, after being trained and tested on the Global Wheat Head Dataset 2021, the YOLOv7-MA model achieves a mean average precision (MAP) of 93.86% at a detection speed of 35.93 frames per second (FPS), outperforming the Faster-RCNN, YOLOv5, YOLOX, and YOLOv7 models. Meanwhile, when tested under the three conditions of low illumination, blur, and occlusion, the coefficient of determination (R2) of YOLOv7-MA is 0.9895, 0.9872, and 0.9882, respectively, and the correlation between the predicted wheat head number and the manual counting result is stronger than that of the other models. In addition, when the YOLOv7-MA model is transferred to field-collected wheat head datasets, it maintains high performance, with MAP in the maturity and filling stages of 93.33% and 93.03%, respectively, and R2 values of 0.9632 and 0.9155, respectively, performing better in the maturity stage. Overall, YOLOv7-MA achieves the accurate identification and counting of wheat heads in complex field backgrounds. In the future, its application with unmanned aerial vehicles (UAVs) can provide technical support for large-scale wheat yield estimation in the field.
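The R2 agreement between predicted and manually counted wheat heads quoted in this abstract is the standard coefficient of determination; with invented per-image counts it can be computed as:

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination of predictions against observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical manual counts per trap image vs. model-predicted counts.
manual = [52, 61, 47, 70, 58, 65]
predicted = [50, 63, 45, 72, 57, 66]

r2 = r_squared(manual, predicted)
```

An R2 close to 1, as reported for YOLOv7-MA, means the predicted counts track the manual counts almost exactly.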

21 pages, 5205 KiB  
Article
Bi-Objective Crop Mapping from Sentinel-2 Images Based on Multiple Deep Learning Networks
by Weicheng Song, Aiqing Feng, Guojie Wang, Qixia Zhang, Wen Dai, Xikun Wei, Yifan Hu, Solomon Obiri Yeboah Amankwah, Feihong Zhou and Yi Liu
Remote Sens. 2023, 15(13), 3417; https://doi.org/10.3390/rs15133417 - 06 Jul 2023
Cited by 3 | Viewed by 1467
Abstract
Accurate assessment of the extent of crop distribution and mapping of different crop types are essential for monitoring and managing modern agriculture. Medium and high spatial resolution remote sensing (RS) for Earth observation and deep learning (DL) are among the most important and effective tools for crop mapping. In this study, we used high-resolution Sentinel-2 imagery from Google Earth Engine (GEE) to map paddy rice and winter wheat in Bengbu City, Anhui Province, China. We compared the performance of popular DL backbone networks within the improved DeepLabv3+ architecture, including HRNet, MobileNet, Xception, and Swin Transformer, as well as Segformer, against the traditional machine learning (ML) method of random forest (RF). The results showed that Segformer, based on the combination of a Transformer architecture encoder and a lightweight multilayer perceptron (MLP) decoder, achieved an overall accuracy (OA) of 91.06%, a mean F1 score (mF1) of 89.26% and a mean Intersection over Union (mIoU) of 80.70%. Combining the results of multiple evaluation metrics, Segformer outperformed the other DL methods. Except for Swin Transformer, which was slightly lower than RF in OA, all DL methods significantly outperformed the RF method in accuracy for the main mapping objects, with mIoU improving by about 13.5~26%. The paddy rice and winter wheat predictions from Segformer were characterized by high mapping accuracy, clear field edges, distinct detail features and a low false classification rate. Consequently, DL is an efficient option for the fast and accurate mapping of paddy rice and winter wheat based on RS imagery.
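The OA and mIoU figures compared in this abstract both derive from the confusion matrix of a classified map. A small sketch with an invented 3-class matrix (not the study's actual numbers):

```python
import numpy as np

# Invented confusion matrix: rows = reference class, cols = predicted class
# (classes: paddy rice, winter wheat, other).
cm = np.array([
    [90,  5,  5],
    [ 4, 88,  8],
    [ 6,  7, 87],
])

# Overall accuracy: correctly classified pixels over all pixels.
oa = np.trace(cm) / cm.sum()

# Per-class IoU = TP / (TP + FP + FN); mIoU is the mean over classes.
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp   # predicted as the class but reference differs
fn = cm.sum(axis=1) - tp   # reference is the class but predicted otherwise
iou = tp / (tp + fp + fn)
miou = iou.mean()
```

Because IoU penalizes both false positives and false negatives per class, mIoU is typically lower than OA, as in the 91.06% OA versus 80.70% mIoU reported for Segformer.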

26 pages, 2261 KiB  
Article
AgriSen-COG, a Multicountry, Multitemporal Large-Scale Sentinel-2 Benchmark Dataset for Crop Mapping Using Deep Learning
by Teodora Selea
Remote Sens. 2023, 15(12), 2980; https://doi.org/10.3390/rs15122980 - 07 Jun 2023
Cited by 3 | Viewed by 1877
Abstract
With the increasing volume of collected Earth observation (EO) data, artificial intelligence (AI) methods have become state-of-the-art in processing and analyzing them. However, there is still a lack of high-quality, large-scale EO datasets for training robust networks. This paper presents AgriSen-COG, a large-scale benchmark dataset for crop type mapping based on Sentinel-2 data. AgriSen-COG deals with the challenges of remote sensing (RS) datasets. First, it includes data from five different European countries (Austria, Belgium, Spain, Denmark, and the Netherlands), targeting the problem of domain adaptation. Second, it is multitemporal and multiyear (2019–2020), therefore enabling analysis based on the growth of crops in time and yearly variability. Third, AgriSen-COG includes an anomaly detection preprocessing step, which reduces the amount of mislabeled information. AgriSen-COG comprises 6,972,485 parcels, making it the most extensive available dataset for crop type mapping. It includes two types of data: pixel-level data and parcel-aggregated information. We thereby target two computer vision (CV) problems: semantic segmentation and classification. To establish the validity of the proposed dataset, we conducted several experiments using state-of-the-art deep-learning models for temporal semantic segmentation with pixel-level data (U-Net and ConvStar networks) and time-series classification with parcel-aggregated information (LSTM, Transformer, and TempCNN networks). The most popular models (U-Net and LSTM) achieve the best performance in the Belgium region, with weighted F1 scores of 0.956 (U-Net) and 0.918 (LSTM). The proposed data are distributed as cloud-optimized GeoTIFFs (COGs), together with a SpatioTemporal Asset Catalog (STAC), which makes AgriSen-COG a findable, accessible, interoperable, and reusable (FAIR) dataset.

18 pages, 7342 KiB  
Article
Detecting Cassava Plants under Different Field Conditions Using UAV-Based RGB Images and Deep Learning Models
by Emmanuel C. Nnadozie, Ogechukwu N. Iloanusi, Ozoemena A. Ani and Kang Yu
Remote Sens. 2023, 15(9), 2322; https://doi.org/10.3390/rs15092322 - 28 Apr 2023
Cited by 4 | Viewed by 1933
Abstract
A significant number of object detection models have been researched for use in plant detection. However, the deployment and evaluation of these models for real-time detection, as well as for crop counting under varying real field conditions, is lacking. In this work, two versions of a state-of-the-art object detection model, YOLOv5n and YOLOv5s, were deployed and evaluated for cassava detection. We compared the performance of the models when trained with different input image resolutions, images of different growth stages, weed interference, and illumination conditions. The models were deployed on an NVIDIA Jetson AGX Orin embedded GPU in order to observe their real-time performance. Results of a use case in a farm field showed that YOLOv5s yielded the best accuracy, whereas YOLOv5n had the best inference speed in detecting cassava plants. YOLOv5s allowed for more precise crop counting, compared to YOLOv5n, which mis-detected cassava plants. YOLOv5s performed better under weed interference at the cost of lower speed. The findings of this work may serve as a reference for choosing which model fits an intended real-life plant detection application, taking into consideration the need for a trade-off between detection speed, detection accuracy, and memory usage.
