Imaging Technology for Detecting Crops and Agricultural Products

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Agricultural Biosystem and Biological Engineering".

Deadline for manuscript submissions: closed (25 September 2022) | Viewed by 67634

Special Issue Editors


Dr. Ahmed Kayad
Guest Editor
Department of Botany, University of California, Riverside, CA 92521, USA
Interests: precision agriculture; remote sensing; digital agriculture; yield monitoring

Dr. Ahmed Rady
Guest Editor
Food, Water, Waste Research Group (FWW), Faculty of Engineering, University of Nottingham, University Park, Nottingham NG7 2RD, UK
Interests: non-invasive food quality assessment; digital food; machine learning; postharvest engineering

Special Issue Information

Dear Colleagues,

Imaging applications in agriculture are rapidly improving at different scales and have the potential to become key elements of sustainable agricultural intensification systems. In particular, satellite and drone imagery provides solutions for monitoring field crops and their within-field variability in terms of crop health status, weed detection and yield monitoring. Low-altitude imagery and machine-vision applications for agricultural products are already having a clear impact on, for example, sorting and harvesting automation. Moreover, the current availability of multispectral and hyperspectral sensors and images, combined with a range of data processing and machine-learning techniques, facilitates unprecedented ideas and applications in agriculture. Imaging applications are usually coupled with machine-learning algorithms as a means of developing classification and regression models. Deep learning is a relatively new machine-learning technique that has gained importance in different fields of the agri-food chain, especially with the significant advancement of image-acquisition hardware and the computational power available from personal computers with high-capability GPUs and from high-performance cloud-based computational servers. There is no doubt that imaging applications in agriculture will continue to drive promising solutions in the current digital agriculture revolution. More research efforts and application ideas are still needed to improve the quality of agricultural products and to support farmers’ decisions under different field and crop conditions. The main goal of this Special Issue is to exchange knowledge, ideas, analytical techniques, applications and experiments that use imagery solutions in the field of agricultural applications.

Dr. Ahmed Kayad
Dr. Ahmed Rady
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital agriculture
  • remote sensing
  • weed detection
  • drone
  • RGB imaging
  • thermal imagery
  • object detection
  • hyperspectral imaging
  • machine learning
  • deep learning

Published Papers (7 papers)


Research


16 pages, 6787 KiB  
Article
An AI Based Approach for Medicinal Plant Identification Using Deep CNN Based on Global Average Pooling
by Rahim Azadnia, Mohammed Maitham Al-Amidi, Hamed Mohammadi, Mehmet Akif Cifci, Avat Daryab and Eugenio Cavallo
Agronomy 2022, 12(11), 2723; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy12112723 - 02 Nov 2022
Cited by 19 | Viewed by 14073
Abstract
Medicinal plants have long been studied because of their importance for preserving human health. However, identifying medicinal plants is time-consuming, tedious and requires an experienced specialist. A vision-based system can therefore support researchers and ordinary people in recognising herb plants quickly and accurately. This study proposes an intelligent vision-based system that identifies herb plants with an automatic Convolutional Neural Network (CNN). The proposed Deep Learning (DL) model consists of a CNN block for feature extraction and a classifier block for classifying the extracted features. The classifier block includes a Global Average Pooling (GAP) layer, a dense layer, a dropout layer, and a softmax layer. The solution was tested at three image resolutions (64 × 64, 128 × 128 and 256 × 256 pixels) for leaf recognition of five different medicinal plants. The vision-based system achieved more than 99.3% accuracy at all resolutions. The proposed method therefore identifies medicinal plants effectively in real time and is capable of replacing traditional methods.
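As an illustrative sketch only (not the authors' released code), a model with the described classifier block, a Global Average Pooling layer followed by a dense layer, dropout and a softmax output, could be assembled in Keras roughly as follows. The convolutional backbone, layer widths and the 128 × 128 input size are assumptions for the example.

```python
# Hedged sketch: a small CNN whose classifier block mirrors the structure described
# in the abstract (GAP -> dense -> dropout -> softmax). Backbone depth, filter
# counts and the 128 x 128 input size are assumptions, not the authors' settings.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_medicinal_plant_cnn(input_shape=(128, 128, 3), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Feature-extraction block (assumed architecture)
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        # Classifier block described in the abstract
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```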

19 pages, 4118 KiB  
Article
Deep Learning-Based Leaf Disease Detection in Crops Using Images for Agricultural Applications
by Andrew J., Jennifer Eunice, Daniela Elena Popescu, M. Kalpana Chowdary and Jude Hemanth
Agronomy 2022, 12(10), 2395; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy12102395 - 03 Oct 2022
Cited by 62 | Viewed by 30029
Abstract
The agricultural sector plays a key role in supplying quality food and makes the greatest contribution to growing economies and populations. Plant diseases may cause significant losses in food production and erode species diversity. Early diagnosis of plant diseases using accurate, automatic detection techniques can enhance the quality of food production and minimize economic losses. In recent years, deep learning has brought tremendous improvements in the recognition accuracy of image classification and object detection systems. Hence, in this paper, we utilized convolutional neural network (CNN)-based pre-trained models for efficient plant disease identification, focusing on fine-tuning the hyperparameters of popular pre-trained models such as DenseNet-121, ResNet-50, VGG-16, and Inception V4. The experiments were carried out using the popular PlantVillage dataset, which has 54,305 image samples of different plant disease species in 38 classes. Model performance was evaluated through classification accuracy, sensitivity, specificity, and F1 score, and a comparative analysis was performed against similar state-of-the-art studies. The experiments showed that DenseNet-121 achieved a classification accuracy of 99.81%, superior to the state-of-the-art models.
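A minimal transfer-learning sketch of the approach described above, fine-tuning an ImageNet-pretrained DenseNet-121 for the 38 PlantVillage classes, is shown below. It is not the authors' code; the input size, the decision to unfreeze the whole backbone and the learning rate are assumptions.

```python
# Hedged sketch: fine-tuning a pre-trained DenseNet-121 backbone for 38 classes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

def build_finetuned_densenet(num_classes=38, input_shape=(224, 224, 3)):
    base = DenseNet121(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = True  # unfreeze the backbone for fine-tuning (assumption)
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    # A small learning rate is typical when fine-tuning pre-trained weights.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```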

22 pages, 41775 KiB  
Article
Structure from Linear Motion (SfLM): An On-the-Go Canopy Profiling System Based on Off-the-Shelf RGB Cameras for Effective Sprayers Control
by Luca De Bortoli, Stefano Marsi, Francesco Marinello, Sergio Carrato, Giovanni Ramponi and Paolo Gallina
Agronomy 2022, 12(6), 1276; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy12061276 - 26 May 2022
Cited by 2 | Viewed by 1508
Abstract
Phytosanitary treatment is one of the most critical operations in vineyard management. Ideally, the spraying system should treat only the canopy, avoiding drift, leakage and waste of product where leaves are not present: variable-rate distribution can be a successful approach, minimizing losses and improving economic as well as environmental performance. The target of this paper is a smart control system that sprays phytosanitary treatment only on the leaves, optimizing the overall cost/benefit ratio. Four different optical-based systems for leaf recognition are analyzed, and their performances are compared using a synthetic vineyard model. We consider three well-established methods (infrared barriers, 2-D LIDAR and stereoscopic cameras) and compare them with an innovative, low-cost, real-time solution based on a computer vision algorithm that uses a simple monocular camera as input. The proposed algorithm analyzes the sequence of input frames and, exploiting the parallax property, estimates the depth map and reconstructs the profile of the vineyard row to be treated. Finally, the performance of the new method is evaluated and compared with that of the other methods in a well-controlled artificial environment resembling an actual vineyard setup, while traveling at standard tractor forward speed.
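To illustrate the parallax idea behind the monocular approach (this is not the authors' algorithm), the sketch below estimates a relative depth map from two frames captured during linear motion: apparent pixel displacement is inversely proportional to depth. The dense-optical-flow settings, camera baseline and focal length are assumed values.

```python
# Hedged sketch: relative depth from parallax between two frames taken during
# linear sideways motion. Under a pinhole model, depth ~ focal * baseline / disparity.
import cv2
import numpy as np

def relative_depth_from_motion(prev_frame, next_frame, baseline_m=0.05, focal_px=900.0):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow (Farneback): pyr_scale=0.5, levels=3, winsize=15,
    # iterations=3, poly_n=5, poly_sigma=1.2, flags=0 (assumed settings).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    disparity = np.abs(flow[..., 0])          # horizontal parallax in pixels
    disparity = np.maximum(disparity, 1e-3)   # avoid division by zero
    return focal_px * baseline_m / disparity  # relative depth map of the row
```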

13 pages, 2574 KiB  
Article
Diversity Characterization of Soybean Germplasm Seeds Using Image Analysis
by Seong-Hoon Kim, Jeong Won Jo, Xiaohan Wang, Myoung-Jae Shin, On Sook Hur, Bo-Keun Ha and Bum-Soo Hahn
Agronomy 2022, 12(5), 1004; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy12051004 - 22 Apr 2022
Cited by 11 | Viewed by 2848
Abstract
Soybean (Glycine max) is a native field crop in Northeast Asia. The National Agrobiodiversity Center (NAC) in Korea has conserved approximately 26,000 soybean germplasm accessions and distributed them to researchers and growers. The phenotypic traits of soybean are investigated during periodic multiplication; however, it is time-consuming to collect sufficient data, especially on the width and height of seeds. During the last decade, the development of phenomics has made high-throughput phenotyping of seed morphology far more efficient. This study collected and analyzed seed morphological traits of 589 germplasm accessions (53,909 seeds) from diverse origins using a digital camera and a computer-based seed phenotyping program. Measured traits included size and shape, 100-seed weight, height, width, perimeter, area, aspect ratio (AR), solidity, circularity, and roundness. The diversity of the soybean germplasm seeds was analyzed based on eight seed morphological traits and 100-seed weight, determined by image phenotyping and direct weighing, respectively. The data obtained from the 589 soybean accessions were divided into five clusters by k-means clustering, and orthogonal projections to latent structures discriminant analysis (OPLS-DA) was performed to compare the clusters. The major differences between clusters were, in decreasing order, in area, perimeter, 100-seed weight, width, and height. Based on origin, seeds of US origin were the largest, followed by those from Korea and China. We classified size, shape, and color according to the International Union for the Protection of New Varieties of Plants (UPOV) guidelines and, in particular, postulated that shape can be distinguished based on the AR and roundness values as secondary parameters. High-throughput phenotyping could make a decisive contribution to resolving the phenotyping bottleneck, and rapid, accurate analysis of a large number of seed phenotypes will assist breeders and enhance agricultural competitiveness.
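As a hedged sketch of the clustering step (not the NAC pipeline), the accessions could be grouped into five clusters from standardized seed-morphology traits as follows; the CSV file and column names are placeholders invented for the example.

```python
# Hedged sketch: k-means clustering of germplasm accessions on standardized
# seed-morphology traits. File name and column names are illustrative only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

TRAITS = ["height", "width", "perimeter", "area", "aspect_ratio",
          "solidity", "circularity", "roundness", "hundred_seed_weight"]

def cluster_accessions(csv_path="seed_traits.csv", n_clusters=5):
    df = pd.read_csv(csv_path)                       # one row per accession
    X = StandardScaler().fit_transform(df[TRAITS])   # standardize the traits
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    df["cluster"] = km.fit_predict(X)                # assign each accession a cluster
    return df
```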

10 pages, 1876 KiB  
Article
Mapping Gaps in Sugarcane by UAV RGB Imagery: The Lower and Earlier the Flight, the More Accurate
by Marcelo Rodrigues Barbosa Júnior, Danilo Tedesco, Rafael de Graaf Corrêa, Bruno Rafael de Almeida Moreira, Rouverson Pereira da Silva and Cristiano Zerbato
Agronomy 2021, 11(12), 2578; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy11122578 - 18 Dec 2021
Cited by 8 | Viewed by 3642
Abstract
Imagery data prove useful for mapping gaps in sugarcane. However, if the quality of the data is poor or the timing of the flight is not compatible with crop phenology, prediction becomes rather inaccurate. We therefore analyzed how the combination of pixel size (3.5, 6.0 and 8.2 cm) and plant height (0.5, 0.9, 1.0, 1.2 and 1.7 m) affects the mapping of gaps from unmanned aerial vehicle (UAV) RGB imagery. Both factors significantly influenced mapping: the larger the pixel or the plant, the less accurate the prediction. Errors were more likely in regions of the field where actively growing vegetation overlapped gaps of 0.5 m, so even the 3.5 cm pixel did not capture them. Overall, 3.5 cm pixels and 0.5 m plants outperformed the other combinations, making this the most accurate (absolute error ~0.015 m) solution for remote gap mapping in the field. Our insights are timely and particularly relevant to the growing practice of flying UAVs to map gaps, and they will enable producers to make decisions on replanting and fertilizing based on site-specific, high-resolution imagery data.
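A minimal sketch of how gaps could be measured from UAV RGB data (not the authors' workflow): classify vegetation with the Excess Green index along a crop-row transect, then convert non-vegetated runs to metres with the ground sampling distance. The ExG threshold and the 0.5 m minimum gap are assumptions.

```python
# Hedged sketch: gap detection along one crop row from RGB pixel samples.
import numpy as np

def gap_lengths_along_row(rgb_row, gsd_m=0.035, exg_threshold=0.05, min_gap_m=0.5):
    """rgb_row: (N, 3) array of pixels sampled along a row, channel values in [0, 1]."""
    r, g, b = rgb_row[:, 0], rgb_row[:, 1], rgb_row[:, 2]
    exg = 2 * g - r - b                   # Excess Green vegetation index
    is_gap = exg < exg_threshold          # True where no canopy is detected
    gaps, run = [], 0
    for gap_pixel in np.append(is_gap, False):   # sentinel closes a trailing run
        if gap_pixel:
            run += 1
        elif run:
            length_m = run * gsd_m               # pixels -> metres via GSD
            if length_m >= min_gap_m:
                gaps.append(length_m)
            run = 0
    return gaps
```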

12 pages, 2204 KiB  
Article
DiaMOS Plant: A Dataset for Diagnosis and Monitoring Plant Disease
by Gianni Fenu and Francesca Maridina Malloci
Agronomy 2021, 11(11), 2107; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy11112107 - 21 Oct 2021
Cited by 31 | Viewed by 9850
Abstract
The classification and recognition of foliar diseases is an increasingly developing field of research in which machine and deep learning are used to support agricultural stakeholders. Datasets are the fuel for the development of these technologies. In this paper, we release and make publicly available DiaMOS Plant, a field dataset collected to diagnose and monitor plant symptoms, consisting of 3505 images of pear fruit and leaves affected by four diseases. In addition, we perform a comparative analysis of existing literature datasets designed for the classification and recognition of leaf diseases, highlighting the main features that maximize the value and information content of the collected data. This study provides guidelines that will be useful to the research community for the selection and construction of datasets.
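As a hedged illustration of how such a dataset could be consumed (the folder layout below is an assumption, not necessarily how DiaMOS Plant is distributed), one-class-per-folder images can be loaded into training and validation splits with Keras utilities:

```python
# Hedged sketch: loading a one-folder-per-class image dataset into train/val splits.
# The root path and 80/20 split are assumptions for the example.
import tensorflow as tf

def load_leaf_disease_dataset(root="diamos_plant/", image_size=(224, 224), batch_size=32):
    train_ds = tf.keras.utils.image_dataset_from_directory(
        root, validation_split=0.2, subset="training", seed=42,
        image_size=image_size, batch_size=batch_size)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        root, validation_split=0.2, subset="validation", seed=42,
        image_size=image_size, batch_size=batch_size)
    return train_ds, val_ds
```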

Review


31 pages, 3400 KiB  
Review
Computer Vision and Deep Learning for Precision Viticulture
by Lucas Mohimont, François Alin, Marine Rondeau, Nathalie Gaveau and Luiz Angelo Steffenel
Agronomy 2022, 12(10), 2463; https://0-doi-org.brum.beds.ac.uk/10.3390/agronomy12102463 - 11 Oct 2022
Cited by 17 | Viewed by 4108
Abstract
In recent decades, researchers have developed novel computing methods to help viticulturists solve their problems, primarily those linked to yield estimation of their crops. This article summarizes the existing research associating computer vision and viticulture. It focuses on approaches that use RGB images directly obtained from parcels, ranging from classic image-analysis methods to Machine Learning, including novel Deep Learning techniques. We intend to produce a complete analysis accessible to everyone, including non-specialized readers, to discuss the recent progress of artificial intelligence (AI) in viticulture. To this end, the first sections of the article present work on detecting grapevine flowers, grapes, and berries, while the last sections present different methods for yield estimation and the problems that arise with this task.
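As a hedged example of the classic image-analysis end of the spectrum such reviews cover (not code from the article), berries can be segmented by colour in HSV space and counted as connected components to give a crude yield proxy; the HSV bounds and minimum blob area below are assumptions that would need tuning per variety and lighting.

```python
# Hedged sketch: colour thresholding plus connected components as a crude berry counter.
import cv2
import numpy as np

def count_berry_blobs(bgr_image, lower_hsv=(100, 40, 20), upper_hsv=(140, 255, 255),
                      min_area_px=30):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep components above the minimum area.
    return sum(1 for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area_px)
```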
