Computer Vision and Deep Learning Technology in Agriculture: Volume II

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 1788

Special Issue Editor


Dr. Juncheng Ma, Guest Editor
College of Water Resources and Civil Engineering, China Agricultural University, Beijing, China
Interests: crop trait estimation; crop growth monitoring

Special Issue Information

Dear Colleagues,

Agriculture is a complex and often unpredictable field. Computer vision, which processes visual data, can improve our understanding of many aspects of agriculture. Deep learning, a modern approach to image processing and data analysis, offers promising results as well. Recent advances in deep learning have significantly promoted computer vision applications in agriculture, providing solutions to many long-standing challenges.

This Special Issue of Agronomy will address advances in the development and application of computer vision, machine learning, and deep learning techniques in agriculture. Papers describing new techniques for processing high-resolution images collected by RGB, multispectral, and hyperspectral sensors from the air (e.g., UAVs) or from the ground are also welcome. We encourage you to submit your research on state-of-the-art applications of computer vision and deep learning in agriculture to this Special Issue.

Dr. Juncheng Ma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • computer vision
  • deep learning
  • machine learning
  • image processing

Published Papers (3 papers)


Research

23 pages, 10296 KiB  
Article
RICE-YOLO: In-Field Rice Spike Detection Based on Improved YOLOv5 and Drone Images
by Maoyang Lan, Changjiang Liu, Huiwen Zheng, Yuwei Wang, Wenxi Cai, Yingtong Peng, Chudong Xu and Suiyan Tan
Agronomy 2024, 14(4), 836; https://doi.org/10.3390/agronomy14040836 - 17 Apr 2024
Viewed by 290
Abstract
The rice spike, a crucial part of the rice plant, plays a vital role in yield estimation, pest detection, and growth stage management in rice cultivation. When drones are used to photograph rice fields, the high shooting angle and wide coverage area can cause rice spikes to appear small in the captured images and can introduce angular distortion of objects at the image edges, resulting in significant occlusion and dense arrangements of rice spikes. These challenges, unique to drone image acquisition, may affect the accuracy of rice spike detection. This study proposes a rice spike detection method that combines deep learning algorithms with the drone perspective. Building on YOLOv5, the efficient multiscale attention (EMA) mechanism is introduced, a novel neck network structure is designed, and the SCYLLA intersection over union (SIoU) loss is integrated, yielding the RICE-YOLO model. Experimental results demonstrate that RICE-YOLO achieves a mAP@0.5 of 94.8% and a recall of 87.6% on the rice spike dataset. Across growth stages, it attains a mAP@0.5 of 96.1% and a recall of 93.1% during the heading stage, and a mAP@0.5 of 86.2% with a recall of 82.6% during the filling stage. Overall, the results indicate that the proposed method enables real-time, efficient, and accurate detection and counting of rice spikes in field environments, offering a theoretical foundation and technical support for spike detection in the management of rice growth processes.
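The SIoU loss used in this paper extends the standard intersection-over-union criterion with angle, distance, and shape penalties. As a simplified, illustrative sketch (not the paper's implementation), the underlying plain IoU between two axis-aligned boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    # Overlap rectangle (empty if the boxes do not intersect)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

SIoU further penalizes the angle and distance between box centers and the mismatch of box shapes, which is what makes it better suited to the small, densely packed spikes described above.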

13 pages, 10486 KiB  
Article
A Method for Analyzing the Phenotypes of Nonheading Chinese Cabbage Leaves Based on Deep Learning and OpenCV Phenotype Extraction
by Haobin Xu, Linxiao Fu, Jinnian Li, Xiaoyu Lin, Lingxiao Chen, Fenglin Zhong and Maomao Hou
Agronomy 2024, 14(4), 699; https://doi.org/10.3390/agronomy14040699 - 28 Mar 2024
Viewed by 508
Abstract
Nonheading Chinese cabbage is an important leafy vegetable, and quantitative identification and automated analysis of its leaves are crucial for cultivating new varieties with higher quality, yield, and resistance. Traditional leaf phenotypic analysis relies mainly on visual observation and the practical experience of breeders, which is time-consuming, labor-intensive, and imprecise, resulting in low breeding efficiency. To address these issues, a method for extracting and analyzing the phenotypes of nonheading Chinese cabbage leaves is proposed, targeting four qualitative traits and ten quantitative traits across 1500 samples, by integrating deep learning and OpenCV image processing. First, a leaf classification model is trained using YOLOv8 to infer the qualitative traits of the leaves, after which the quantitative traits are extracted and calculated using OpenCV. The results indicate that the model achieved an average accuracy of 95.25%, an average precision of 96.09%, an average recall of 96.31%, and an average F1 score of 0.9620 for the four qualitative traits. Among the ten quantitative traits, the OpenCV-calculated values for whole leaf length, leaf width, and total leaf area were compared with manually measured values, showing RMSEs of 0.19 cm, 0.1762 cm, and 0.2161 cm², respectively. Bland–Altman analysis indicated that the errors all fell within the 95% confidence intervals, and the average detection time per image was 269 ms. This method achieved good results in extracting phenotypic traits from nonheading Chinese cabbage leaves, significantly reducing the labor and time costs of genetic resource analysis, and provides a new high-throughput, precise, and automated technique for analyzing nonheading Chinese cabbage genetic resources.
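To give a rough sense of the kind of quantitative-trait computation described above, the following is a hypothetical NumPy analog (not the paper's OpenCV pipeline) that estimates leaf length, width, and area from a binary segmentation mask, given an assumed pixels-per-centimeter calibration:

```python
import numpy as np

def leaf_metrics(mask, px_per_cm=10.0):
    """Estimate simple leaf traits from a 2-D binary mask (nonzero = leaf).

    px_per_cm is an assumed calibration factor; a real pipeline would
    derive it from a reference object or camera geometry.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return {"length_cm": 0.0, "width_cm": 0.0, "area_cm2": 0.0}
    # Bounding-box extents serve as a crude proxy for leaf length/width
    length_cm = (ys.max() - ys.min() + 1) / px_per_cm
    width_cm = (xs.max() - xs.min() + 1) / px_per_cm
    # Area is the count of leaf pixels scaled by the calibration
    area_cm2 = ys.size / px_per_cm**2
    return {"length_cm": length_cm, "width_cm": width_cm, "area_cm2": area_cm2}
```

An OpenCV version of the same idea would typically use `cv2.findContours`, `cv2.contourArea`, and a rotated bounding rectangle instead of raw pixel statistics.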

24 pages, 8939 KiB  
Article
YOLOv7-GCA: A Lightweight and High-Performance Model for Pepper Disease Detection
by Xuejun Yue, Haifeng Li, Qingkui Song, Fanguo Zeng, Jianyu Zheng, Ziyu Ding, Gaobi Kang, Yulin Cai, Yongda Lin, Xiaowan Xu and Chaoran Yu
Agronomy 2024, 14(3), 618; https://doi.org/10.3390/agronomy14030618 - 19 Mar 2024
Viewed by 605
Abstract
Existing deep learning models for monitoring and preventing pepper diseases face challenges in accurately identifying diseases under inter-crop occlusion and complex backgrounds. To address this issue, we propose YOLOv7-GCA, a modified model based on YOLOv7 for pepper disease detection that effectively overcomes these challenges. The model introduces three key enhancements. First, the lightweight GhostNetV2 serves as the feature extraction network to improve detection speed. Second, a cascading fusion network (CFNet) replaces the original feature fusion network, improving the model's expressive ability in complex backgrounds and enabling multi-scale feature extraction and fusion. Finally, the convolutional block attention module (CBAM) is introduced to focus on the important features in the images and improve the accuracy and robustness of the model. We constructed a dataset of 1259 images covering four types of pepper disease: anthracnose, bacterial diseases, umbilical rot, and viral diseases, and applied data augmentation before experimental verification on this dataset. The experimental results demonstrate that YOLOv7-GCA reduces the parameter count by 34.3% relative to the original YOLOv7 while improving mAP by 13.4% and detection speed by 124 frames/s. The model size was also reduced from 74.8 MB to 46.9 MB, which facilitates deployment on mobile devices. Compared with seven mainstream detection models, YOLOv7-GCA achieved a balance between speed, model size, and accuracy, making it a high-performance, lightweight pepper disease detection solution that can provide accurate and timely diagnoses for farmers and researchers.
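The channel-attention branch of CBAM, mentioned above, reweights feature channels using spatially pooled statistics passed through a shared two-layer MLP. A minimal NumPy sketch of this idea follows (illustrative only; the paper's model is built in a deep learning framework, and `w1`/`w2` here are hypothetical weight matrices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) form the shared MLP with
    reduction ratio r.
    """
    avg = feat.mean(axis=(1, 2))  # global average pool -> (C,)
    mx = feat.max(axis=(1, 2))    # global max pool -> (C,)
    # Shared two-layer MLP with ReLU, applied to both pooled vectors
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    # Per-channel weights in (0, 1)
    scale = sigmoid(mlp(avg) + mlp(mx))
    return feat * scale[:, None, None]
```

The spatial-attention branch of CBAM works analogously, pooling over the channel axis and convolving the result to produce a per-pixel weight map.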
