Image Analysis Techniques in Agriculture

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Agricultural Technology".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 84474

Special Issue Editors


Dr. Maciej Zaborowicz
Guest Editor
Department of Biosystems Engineering, Poznań University of Life Sciences, Poznan, Poland
Interests: computer image analysis; artificial neural networks; neural modeling; machine learning; deep learning; computer science in agriculture

Dr. Dawid Wojcieszak
Co-Guest Editor
Department of Biosystems Engineering, Faculty of Environmental Engineering and Mechanical Engineering, Poznan University of Life Sciences, Poznan, Poland
Interests: modern agricultural equipment; use of agricultural machinery; postharvest technologies and process engineering; biomass energy; biosystems engineering

Special Issue Information

Dear Colleagues,

In modern agriculture, now commonly referred to as Agriculture 4.0, the main emphasis is placed on the development of precise tools supporting all activities related to food production. Increasing attention is being paid not only to the quantity but, above all, to the quality of agricultural products, as well as to waste and its characteristics. Computer image analysis, which supports precision farming in many respects, is becoming a widely used tool. It allows the quality of individual products, such as cereals, vegetables, and fruits, to be assessed, but also the condition of fields and plantations. These tools have an advisory function and in many cases can prevent losses for the agricultural producer. The 21st century is a century of information, including information encoded in graphic form, most often as a digital image. Appropriate acquisition of image material, followed by the determination of characteristics and the development of methods that directly or indirectly support agricultural production, for example in plant protection, belongs among the broad interests of modern Agriculture 4.0.

Dr. Maciej Zaborowicz
Dr. Dawid Wojcieszak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Imaging techniques in agriculture
  • Acquisition of the image of products of agricultural origin
  • Acquisition of characteristics
  • Image processing
  • Data processing
  • Image analysis
  • Neural image analysis
  • Product evaluation
  • Quality assessment
  • Safety of food products
  • Other agricultural topics (Image analysis)

Published Papers (18 papers)

Research

13 pages, 4910 KiB  
Article
Estimation of Cultivated Land Quality Based on Soil Hyperspectral Data
by Chenjie Lin, Yueming Hu, Zhenhua Liu, Yiping Peng, Lu Wang and Dailiang Peng
Agriculture 2022, 12(1), 93; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture12010093 - 11 Jan 2022
Cited by 9 | Viewed by 2231
Abstract
Efficient monitoring of cultivated land quality (CLQ) plays a significant role in cultivated land protection. Soil spectral data can reflect the state of cultivated land. However, most studies have used crop spectral information to estimate CLQ, and there is little research on using soil spectral data for this purpose. In this study, soil hyperspectral data were utilized for the first time to evaluate CLQ. We obtained the optimal spectral variables from dry soil spectral data using a gradient boosting decision tree (GBDT) algorithm combined with the variance inflation factor (VIF). Two estimation algorithms (partial least-squares regression (PLSR) and back-propagation neural network (BPNN)) with 10-fold cross-validation were employed to develop the relationship model between the optimal spectral variables and CLQ. The optimal algorithm was determined by the degree of fit (determination coefficient, R2). To estimate CLQ at the regional scale, HuanJing-1A Hyperspectral Imager (HJ-1A HSI) data were transformed into dry soil spectral data using a linkage model relating original soil spectral reflectance to dry soil spectral reflectance. The study was conducted in Guangdong Province, China, and in its Conghua district. The results showed the following: (1) the optimal spectral variables selected from the dry soil spectral variables were 478 nm, 502 nm, 614 nm, 872 nm, 966 nm, 1007 nm, and 1796 nm. (2) The BPNN was the optimal model, with an R2(C) of 0.71 and a normalized root mean square error (NRMSE) of 12.20%. (3) The R2 of the regional-scale CLQ estimation based on the proposed method was 0.05 higher, and the NRMSE 0.92% lower, than that of the CLQ map obtained using the traditional method. Additionally, the NRMSE of the regional-scale CLQ estimation based on dry soil spectral variables from HJ-1A HSI data was 2.00% lower than that of the model based on the original HJ-1A HSI data.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
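
A minimal sketch of the band-selection idea described above (GBDT importance ranking followed by a variance inflation factor filter), assuming a reflectance matrix X and a CLQ target y; the function names, thresholds, and greedy VIF loop are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: rank spectral bands with a GBDT, then drop bands whose
# variance inflation factor (VIF) signals strong collinearity.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def vif(X: np.ndarray, j: int) -> float:
    """VIF of column j = 1 / (1 - R^2) when regressing it on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(others, y).score(others, y)
    return 1.0 / max(1.0 - r2, 1e-6)

def select_bands(X, y, importance_keep=15, vif_threshold=10.0):
    # Step 1: keep the bands the GBDT finds most informative for CLQ.
    gbdt = GradientBoostingRegressor(random_state=0).fit(X, y)
    kept = list(np.argsort(gbdt.feature_importances_)[::-1][:importance_keep])
    # Step 2: greedily discard the band with the worst collinearity until all VIFs pass.
    while len(kept) > 2:
        vifs = [vif(X[:, kept], i) for i in range(len(kept))]
        worst = int(np.argmax(vifs))
        if vifs[worst] <= vif_threshold:
            break
        kept.pop(worst)
    return kept
```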

15 pages, 3870 KiB  
Article
Image Recognition of Male Oilseed Rape (Brassica napus) Plants Based on Convolutional Neural Network for UAAS Navigation Applications on Supplementary Pollination and Aerial Spraying
by Zhu Sun, Xiangyu Guo, Yang Xu, Songchao Zhang, Xiaohui Cheng, Qiong Hu, Wenxiang Wang and Xinyu Xue
Agriculture 2022, 12(1), 62; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture12010062 - 05 Jan 2022
Cited by 6 | Viewed by 1741
Abstract
To ensure hybrid oilseed rape (OSR, Brassica napus) seed production, two things are necessary: stamen sterility on the female OSR plants and the effective spread of pollen onto the pistils from the male OSR plants to the female OSR plants. The unmanned agricultural aerial system (UAAS) has developed rapidly in China and has been used for supplementary pollination and aerial spraying during hybrid OSR seed production. This study developed a new method to rapidly recognize male OSR plants and extract the row center line to support UAAS navigation. A male OSR plant recognition model was constructed based on a convolutional neural network (CNN). Sequence images of male OSR plants were extracted, and the feature regions and feature points were obtained from the images through morphological and boundary processing methods and horizontal segmentation, respectively. The male OSR plant image recognition accuracies of different CNN structures and segmentation sizes are discussed. The male OSR plant row center lines were fitted using the least-squares method (LSM) and the Hough transform. The results showed that the segmentation algorithm could separate the male OSR plants from the complex background. The highest average recognition accuracy was 93.54%, and the minimum loss function value was 0.2059, with three convolutional layers, one fully connected layer, and a segmentation size of 40 pix × 40 pix. The LSM is better suited for center line fitting. The average recognition accuracies on the original input images were 98% and 94%, and the average root mean square errors (RMSE) of the angle were 3.22° and 1.36°, under cloudy-day and sunny-day lighting conditions, respectively. The results demonstrate the potential of using digital imaging technology to recognize male OSR plant rows for UAAS visual navigation in hybrid OSR supplementary pollination and aerial spraying, which would be a meaningful supplement to precision agriculture.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
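
For the line-fitting step, the following hedged sketch contrasts a least-squares fit with a probabilistic Hough transform on a binary mask of detected male plants; it is an assumed, centroid-based simplification rather than the paper's pipeline.

```python
# Illustrative sketch: fit a crop-row center line to the centroids of binary
# plant blobs, comparing least squares with a Hough transform.
import cv2
import numpy as np

def row_center_line(mask: np.ndarray):
    """mask: uint8 binary image where male-plant pixels are 255."""
    # Centroids of connected components stand in for the per-segment feature points.
    n, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    pts = centroids[1:]  # drop the background component
    # Least-squares fit x = a*y + b (rows are roughly vertical in the image).
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    # Hough alternative on the raw mask, for comparison.
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return (a, b), lines
```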

17 pages, 8218 KiB  
Article
A Handheld Grassland Vegetation Monitoring System Based on Multispectral Imaging
by Aiwu Zhang, Shaoxing Hu, Xizhen Zhang, Taipei Zhang, Mengnan Li, Haiyu Tao and Yan Hou
Agriculture 2021, 11(12), 1262; https://doi.org/10.3390/agriculture11121262 - 13 Dec 2021
Cited by 7 | Viewed by 2500
Abstract
Monitoring grassland vegetation growth is of vital importance to scientific grazing and grassland management. Users expect to be able to use a portable device, like a mobile phone, to monitor grassland vegetation growth at any time. In this paper, we propose a handheld grassland vegetation monitoring system to achieve this goal. The system includes two parts: the hardware unit is a hand-held, smartphone-based multispectral imaging tool named ASQ-Discover, which has six bands (wavelengths), including three visible bands (450 nm, 550 nm, 650 nm), a red-edge band (750 nm), and two near-infrared bands (850 nm, 960 nm). The imagery data of each band have a size of 5120 × 3840 pixels with 8-bit depth. The software unit improves image quality through vignetting removal, radiometric calibration, and misalignment correction, and estimates and analyzes spectral traits of grassland vegetation (Fresh Grass Ratio (FGR), NDVI, NDRE, BNDVI, GNDVI, OSAVI, and TGI) that are indicators of vegetation growth. We introduce the hardware and software units in detail, and we also report experiments in five pastures located in Haiyan County, Qinghai Province. Our experimental results show that the handheld system has the potential to revolutionize grassland monitoring by allowing operators to accomplish vegetation growth monitoring tasks with a hand-held tool.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
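
The spectral indices named above have widely used reflectance formulations. Below is a small sketch computing them from per-band reflectance arrays (TGI in its simplified form); the ASQ-Discover software may use slightly different definitions, and FGR is omitted because it is specific to the system.

```python
# Standard vegetation indices from per-band reflectance arrays (values in [0, 1]).
import numpy as np

def indices(blue, green, red, rededge, nir):
    eps = 1e-9  # avoid division by zero
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "NDRE":  (nir - rededge) / (nir + rededge + eps),
        "BNDVI": (nir - blue) / (nir + blue + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "OSAVI": (nir - red) / (nir + red + 0.16),
        "TGI":   green - 0.39 * red - 0.61 * blue,  # simplified reflectance form
    }
```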

15 pages, 9775 KiB  
Article
Study on Plant Growth and Nutrient Uptake under Different Aeration Intensity in Hydroponics with the Application of Particle Image Velocimetry
by Bateer Baiyin, Kotaro Tagawa, Mina Yamada, Xinyan Wang, Satoshi Yamada, Sadahiro Yamamoto and Yasuomi Ibaraki
Agriculture 2021, 11(11), 1140; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11111140 - 14 Nov 2021
Cited by 10 | Viewed by 4263
Abstract
Aeration is considered beneficial for hydroponics. However, little information is available on the effects of aeration, and even less on nutrient solutions with bubble flow and their agronomic effects. In this study, the effects of aeration intensity on plants were studied through cultivation experiments and flow field visualization. It was found that plant growth did not increase linearly with an increase in aeration intensity. When the aeration intensity was within a low range (0.07–0.15 L·L−1 NS·min−1), increasing the aeration intensity increased plant growth. However, after the aeration intensity reached a certain level (0.15–1.18 L·L−1 NS·min−1), some indicators did not change significantly. When the aeration intensity continued to increase (1.18–2.35 L·L−1 NS·min−1), growth began to decrease. These results show that, for increasing dissolved oxygen and promoting plant growth, the rule is not "the higher the aeration intensity, the better". There is a reasonable range of aeration intensity within which crops grow normally and rapidly. In addition, increasing the aeration intensity means increasing energy use and operating costs. In actual hydroponic production, it is therefore very important to find a reasonable aeration intensity range.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

13 pages, 7777 KiB  
Article
Semi-Automated Ground Truth Segmentation and Phenotyping of Plant Structures Using k-Means Clustering of Eigen-Colors (kmSeg)
by Michael Henke, Kerstin Neumann, Thomas Altmann and Evgeny Gladilin
Agriculture 2021, 11(11), 1098; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11111098 - 04 Nov 2021
Cited by 4 | Viewed by 2509
Abstract
Background. Efficient analysis of the large image data produced in greenhouse phenotyping experiments is often challenged by the large variability of optical plant and background appearance, which requires advanced classification methods and reliable ground truth data for their training. In the absence of appropriate computational tools, ground truth data have to be generated manually, which is a time-consuming task. Methods. Here, we present an efficient GUI-based software solution that reduces the task of plant image segmentation to the manual annotation of a small number of image regions automatically pre-segmented using k-means clustering of Eigen-colors (kmSeg). Results. Our experimental results show that, in contrast to other supervised clustering techniques, k-means enables a computationally efficient pre-segmentation of large plant images at their original resolution. Thereby, the binary segmentation of plant images into fore- and background regions is performed within a few minutes, with an average accuracy of 96–99% validated by direct comparison with ground truth data. Conclusions. Primarily developed for efficient ground truth segmentation and phenotyping of greenhouse-grown plants, the kmSeg tool can be applied for efficient labeling and quantitative analysis of arbitrary images exhibiting distinctive differences between the colors of fore- and background structures.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
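
The core idea, projecting pixel colors onto principal "eigen-color" axes and pre-segmenting them with k-means before manual cluster labeling, can be sketched as follows; this is an assumption-laden illustration, not the kmSeg code.

```python
# Project pixel colors into an eigen-color space, cluster them, and return a
# cluster map for the user to annotate as plant / background.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def presegment(image_rgb: np.ndarray, k: int = 8) -> np.ndarray:
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    eigen = PCA(n_components=3).fit_transform(pixels)            # eigen-color space
    labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(eigen)
    return labels.reshape(h, w)  # each of the k clusters is then labeled manually
```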

16 pages, 14823 KiB  
Article
Review on Multitemporal Classification Methods of Satellite Images for Crop and Arable Land Recognition
by Joanna Pluto-Kossakowska
Agriculture 2021, 11(10), 999; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11100999 - 13 Oct 2021
Cited by 16 | Viewed by 3231
Abstract
This paper presents a review of the research conducted in the field of multitemporal classification methods used for the automatic identification of crops and arable land from optical satellite images. Reviewing and systematizing these methods in terms of the effectiveness and accuracy of the obtained results allows further development in this area to be planned. The state-of-the-art analysis covers various methodological approaches, including the selection of data in terms of spatial resolution, the selection of algorithms, and external conditions related to arable land use, especially the structure of crops. The results achieved with various approaches and classifiers and subsequently reported in the literature vary depending on the crops, the area of analysis, and the sources of satellite data. Hence, their review and systematic conclusions are needed, especially in the context of the growing interest in automatic processes for identifying crops for statistical purposes or monitoring changes in arable land. The results of this study show no significant difference between the accuracies achieved by different machine learning algorithms, yet on average artificial neural network classifiers perform better than the others by a few percent. For very fragmented regions, better results were achieved using Sentinel-2 and SPOT-5 rather than Landsat images, but the level of accuracy can still be improved. For areas with large plots, there is no difference in the level of accuracy achieved from any HR images.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

18 pages, 4297 KiB  
Article
Potato Surface Defect Detection Based on Deep Transfer Learning
by Chenglong Wang and Zhifeng Xiao
Agriculture 2021, 11(9), 863; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11090863 - 10 Sep 2021
Cited by 20 | Viewed by 4008
Abstract
Food defect detection is crucial for the automation of food production and processing. Potato surface defect detection remains challenging due to the irregular shapes of individual potatoes and the various types of defects. This paper employs deep convolutional neural network (DCNN) models for potato surface defect detection. In particular, we applied transfer learning by fine-tuning three pre-trained DCNN models—SSD Inception V2, RFCN ResNet101, and Faster RCNN ResNet101—on a self-developed dataset, and achieved accuracies of 92.5%, 95.6%, and 98.7%, respectively. RFCN ResNet101 presented the best overall performance in detection speed and accuracy. It was selected as the final model for out-of-sample testing, further demonstrating the model's ability to generalize.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
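
As an illustration of the transfer-learning step, the sketch below re-heads a readily available torchvision detector (Faster R-CNN with a ResNet50-FPN backbone) for defect classes; the backbone and the class list are stand-ins for the ResNet101-based models and labels used by the authors.

```python
# Transfer-learning sketch: swap the detection head of a COCO-pretrained model
# so it predicts hypothetical potato defect classes.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 3  # background + assumed defect labels (placeholders)

def build_model():
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained base
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the box predictor so it outputs the new class set.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

model = build_model()
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9, weight_decay=5e-4)
```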

28 pages, 3761 KiB  
Article
A Comparative Study of Various Methods for Handling Missing Data in UNSODA
by Yingpeng Fu, Hongjian Liao and Longlong Lv
Agriculture 2021, 11(8), 727; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11080727 - 30 Jul 2021
Cited by 6 | Viewed by 3221
Abstract
UNSODA, a free international soil database, is very popular and has been used in many fields. However, missing soil property data have limited the utility of this dataset, especially for data-driven models. Here, three machine-learning-based methods, i.e., random forest (RF) regression, support vector regression (SVR), and artificial neural network (ANN) regression, and two statistics-based methods, i.e., mean imputation and multiple imputation (MI), were used to impute the missing soil property data, including pH, saturated hydraulic conductivity (SHC), organic matter content (OMC), porosity (PO), and particle density (PD). The missing upper depths (DU) and lower depths (DL) of the sampling locations were also imputed. Before imputing the missing values in UNSODA, a missing-value simulation was performed and evaluated quantitatively. Next, nonparametric tests and multiple linear regression were performed to qualitatively evaluate the reliability of these five imputation methods. Results showed that the RMSEs and MAEs of all features fluctuated within acceptable ranges. RF imputation and MI presented the lowest RMSEs and MAEs; both methods are good at explaining the variability of the data. The standard error, coefficient of variation, and standard deviation decreased significantly after imputation, and there were no significant differences before and after imputation. Together, DU, pH, SHC, OMC, PO, and PD explained 91.0%, 63.9%, 88.5%, 59.4%, and 90.2% of the variation in BD using RF, SVR, ANN, mean, and MI, respectively; this value was 99.8% when missing values were discarded. This study suggests that the RF and MI methods may be better for imputing the missing data in UNSODA.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
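
Two of the imputation strategies compared in the paper, mean imputation and random-forest-based imputation, can be illustrated with scikit-learn as below; the toy matrix and column meanings are placeholders, not UNSODA records.

```python
# Mean vs. random-forest-based imputation on a small matrix with missing values.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.ensemble import RandomForestRegressor

X = np.array([[6.1, np.nan, 1.42],
              [5.4, 12.0,   np.nan],
              [np.nan, 9.5, 1.36],
              [7.2, 14.1,   1.50]])  # toy values standing in for soil properties

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)
rf_filled = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=200, random_state=0),
    max_iter=10, random_state=0).fit_transform(X)
```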

23 pages, 16460 KiB  
Article
Disease Detection in Apple Leaves Using Deep Convolutional Neural Network
by Prakhar Bansal, Rahul Kumar and Somesh Kumar
Agriculture 2021, 11(7), 617; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11070617 - 30 Jun 2021
Cited by 91 | Viewed by 12016
Abstract
The automatic detection of diseases in plants is necessary, as it reduces the tedious work of monitoring large farms and detects disease at an early stage of its occurrence, minimizing further degradation of the plants. Besides the decline in plant health, a country's economy is strongly affected by this scenario through lower production. The current approach of identifying diseases through an expert is slow and non-optimal for large farms. Our proposed model is an ensemble of the pre-trained DenseNet121, EfficientNetB7, and EfficientNet NoisyStudent networks, which aims to classify images of apple tree leaves into one of the following categories: healthy, apple scab, apple cedar rust, and multiple diseases. Various image augmentation techniques are included in this research to increase the dataset size and, subsequently, the model's accuracy. Our proposed model achieves an accuracy of 96.25% on the validation dataset and can identify leaves with multiple diseases with 90% accuracy. The proposed model achieved good performance on different metrics and can be deployed in the agricultural domain to identify plant health accurately and in a timely manner.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
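
A minimal ensemble sketch in the spirit of the approach, averaging the softmax outputs of two ImageNet-pretrained backbones re-headed for the four leaf classes; the exact architectures (e.g., EfficientNet NoisyStudent) and training details of the paper are not reproduced.

```python
# Average class probabilities from two pretrained backbones re-headed for 4 classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # healthy, scab, cedar rust, multiple diseases

densenet = models.densenet121(weights="DEFAULT")
densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_CLASSES)

effnet = models.efficientnet_b7(weights="DEFAULT")
effnet.classifier[1] = nn.Linear(effnet.classifier[1].in_features, NUM_CLASSES)

@torch.no_grad()
def ensemble_predict(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, H, W) normalized leaf images; returns averaged probabilities."""
    probs = [torch.softmax(m.eval()(batch), dim=1) for m in (densenet, effnet)]
    return torch.stack(probs).mean(dim=0)
```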

15 pages, 2047 KiB  
Article
Classification of Grain Storage Inventory Modes Based on Temperature Contour Map of Grain Bulk Using Back Propagation Neural Network
by Hongwei Cui, Qiang Zhang, Jinsong Zhang, Zidan Wu and Wenfu Wu
Agriculture 2021, 11(5), 451; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11050451 - 16 May 2021
Cited by 5 | Viewed by 3009
Abstract
Classification of inventory modes can reduce the workload of grain depot management; it is time-saving and not labor-intensive. This paper proposes a method that uses temperature contour maps converted from digital temperature data to classify the inventory modes of grain stored in a large bulk warehouse, which mainly include changes of inventory and routine operations performed (aeration). A back-propagation (BP) neural network was used to identify and classify grain storage inventory modes based on the temperature contour map, to support grain depot management. The method extracted and combined the color coherence vector (CCV), texture feature vector (TFV), and smoothness feature vector (SFV) of the temperature contour maps as the input vector of the BP neural network, and used the inventory modes as the output vector. The experimental results indicated that the accuracy of the BP neural network with the combined vector (CCV, TFV, and SFV) as the input was about 93.9%, and its training and prediction times were 320 s and 0.12 s, respectively.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
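
A rough sketch of the classification idea: describe each temperature contour map with simple color and texture descriptors and classify the inventory mode with a small back-propagation network. The CCV/TFV/SFV features of the paper are only crudely approximated here, and all names are illustrative.

```python
# Simple color-histogram and texture proxies feeding an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def simple_features(contour_map_rgb: np.ndarray) -> np.ndarray:
    img = contour_map_rgb.astype(np.float32) / 255.0
    color_hist = np.concatenate(
        [np.histogram(img[..., c], bins=8, range=(0, 1))[0] for c in range(3)])
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)                       # crude smoothness/texture proxies
    texture = np.array([np.mean(gx**2 + gy**2), gray.var()])
    return np.concatenate([color_hist / color_hist.sum(), texture])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
# clf.fit(np.stack([simple_features(m) for m in maps]), inventory_mode_labels)
```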

16 pages, 12255 KiB  
Article
3D Point Cloud on Semantic Information for Wheat Reconstruction
by Yuhang Yang, Jinqian Zhang, Kangjie Wu, Xixin Zhang, Jun Sun, Shuaibo Peng, Jun Li and Mantao Wang
Agriculture 2021, 11(5), 450; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11050450 - 16 May 2021
Cited by 8 | Viewed by 3258
Abstract
Phenotypic analysis has always played an important role in breeding research. At present, wheat phenotypic analysis mostly relies on high-precision instruments, which makes it costly. Thanks to the development of 3D reconstruction technology, reconstructed wheat 3D models can also be used for phenotypic analysis. In this paper, a method is proposed to reconstruct a wheat 3D model based on semantic information; the method can generate the corresponding 3D point cloud model of wheat according to a semantic description. First, an object detection algorithm is used to detect the characteristics of some wheat phenotypes during the growth process. Second, the growth environment information and some phenotypic features of the wheat are combined into semantic information. Third, a text-to-image algorithm is used to generate a 2D image of the wheat. Finally, the wheat in the 2D image is transformed into an abstract 3D point cloud, and a higher-precision point cloud model is obtained using a deep learning algorithm. Extensive experiments indicate that the method reconstructs 3D models well and offers a heuristic for phenotypic analysis and breeding research based on deep learning.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

13 pages, 12132 KiB  
Article
Classification of Amanita Species Based on Bilinear Networks with Attention Mechanism
by Peng Wang, Jiang Liu, Lijia Xu, Peng Huang, Xiong Luo, Yan Hu and Zhiliang Kang
Agriculture 2021, 11(5), 393; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11050393 - 26 Apr 2021
Cited by 14 | Viewed by 2795
Abstract
The accurate classification of Amanita species is helpful for research on their biological control and medicinal value, and it can also prevent mushroom poisoning incidents. In this paper, we constructed a bilinear convolutional neural network (B-CNN) with an attention mechanism, based on transfer learning, to classify Amanita species. When the model is trained, weights pre-trained on ImageNet are used, and the Adam optimizer updates the network parameters. In the test process, images of Amanita at different growth stages were used to further test the generalization ability of the model. After comparing our model with other models, the results show that our model greatly reduces the number of parameters while achieving high accuracy (95.2%) and good generalization ability. It is an efficient classification model that provides a new option for mushroom classification in areas with limited computing resources.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
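
The bilinear-pooling core of a B-CNN can be sketched as below, with a pretrained VGG16 backbone assumed and the attention mechanism omitted for brevity; hyperparameters are illustrative, not the paper's.

```python
# Classic bilinear pooling: outer products of CNN feature maps, pooled over
# spatial positions, signed-sqrt and L2 normalized, then linearly classified.
import torch
import torch.nn as nn
from torchvision import models

class BilinearClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.backbone = models.vgg16(weights="DEFAULT").features  # (N, 512, H', W')
        self.fc = nn.Linear(512 * 512, num_classes)

    def forward(self, x):
        f = self.backbone(x)
        n, c, h, w = f.shape
        f = f.reshape(n, c, h * w)
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)      # (N, C, C)
        bilinear = bilinear.reshape(n, -1)
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-10)
        bilinear = nn.functional.normalize(bilinear)
        return self.fc(bilinear)
```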

12 pages, 2185 KiB  
Article
Assessment of the Content of Dry Matter and Dry Organic Matter in Compost with Neural Modelling Methods
by Dawid Wojcieszak, Maciej Zaborowicz, Jacek Przybył, Piotr Boniecki and Aleksander Jędruś
Agriculture 2021, 11(4), 307; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11040307 - 01 Apr 2021
Cited by 7 | Viewed by 2927
Abstract
Neural image analysis is commonly used to solve scientific problems in biosystems and mechanical engineering. The method has been applied, for example, to assess the quality of foodstuffs such as fruit and vegetables, cereal grains, and meat. It can also be used to analyse composting processes. This scientific problem leads to the research hypothesis: it is possible to identify representative traits of the image of composted material that are necessary to create a neural model supporting the assessment of the content of dry matter and dry organic matter in composted material. The outcome of the research is the identification of selected features of the composted material which, combined with neural image analysis methods, yielded a new, original method enabling effective assessment of the content of dry matter and dry organic matter. The content of dry matter and dry organic matter can be analysed by means of parameters describing the colour of the compost. The best neural models developed for the assessment of the content of dry matter and dry organic matter in compost are, in visible light, RBF 19:19-2-1:1 (test error 0.0922) and MLP 14:14-14-11-1:1 (test error 0.1722), and, in mixed light, RBF 30:30-8-1:1 (test error 0.0764) and MLP 7:7-9-7-1:1 (test error 0.1795). The neural models generated for the compost images taken in mixed light had better qualitative characteristics.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
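
A hedged sketch of the modelling idea: represent each compost image by simple color statistics and regress dry matter content with an MLP. The paper's actual input features and RBF/MLP topologies differ; names and sizes here are illustrative.

```python
# Color-statistic features of a compost image regressed to dry matter content.
import numpy as np
from sklearn.neural_network import MLPRegressor

def colour_features(image_rgb: np.ndarray) -> np.ndarray:
    img = image_rgb.reshape(-1, 3).astype(np.float32)
    return np.concatenate([img.mean(axis=0), img.std(axis=0)])  # mean and spread per channel

model = MLPRegressor(hidden_layer_sizes=(14, 11), max_iter=5000, random_state=0)
# model.fit(np.stack([colour_features(img) for img in images]), dry_matter_content)
```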

18 pages, 6864 KiB  
Article
Multi-Feature Patch-Based Segmentation Technique in the Gray-Centered RGB Color Space for Improved Apple Target Recognition
by Pan Fan, Guodong Lang, Pengju Guo, Zhijie Liu, Fuzeng Yang, Bin Yan and Xiaoyan Lei
Agriculture 2021, 11(3), 273; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture11030273 - 22 Mar 2021
Cited by 20 | Viewed by 3277
Abstract
In the vision system of apple-picking robots, the main challenge is to rapidly and accurately identify apple targets with varying halation and shadows on their surfaces. To solve this problem, this study proposes a novel, multi-feature, patch-based apple image segmentation technique using the gray-centered red-green-blue (RGB) color space. The developed method uses a multi-feature selection process that eliminates the effect of halation and shadows in apple images. By exploring all the features of the image, including halation and shadows, in the gray-centered RGB color space, the proposed algorithm, which is a generalization of the K-means clustering algorithm, provides an efficient target segmentation result. The proposed method was tested on 240 apple images. It offered an average accuracy rate of 98.79%, a recall rate of 99.91%, an F1 measure of 99.35%, a false positive rate of 0.04%, and a false negative rate of 1.18%. Compared with classical segmentation methods and conventional clustering algorithms, as well as popular deep-learning segmentation algorithms, the proposed method performs with high efficiency and accuracy and can guide robotic harvesting.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
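
A simplified sketch of the general idea (not the authors' multi-feature, patch-based algorithm): shift RGB values to a gray-centered space, cluster pixels with K-means, and keep the cluster whose center is most strongly red.

```python
# Gray-centered RGB clustering with a redness heuristic to pick the apple cluster.
import numpy as np
from sklearn.cluster import KMeans

def segment_apples(image_rgb: np.ndarray, k: int = 3) -> np.ndarray:
    pixels = image_rgb.reshape(-1, 3).astype(np.float32) - 128.0   # gray-centered RGB
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    redness = km.cluster_centers_[:, 0] - km.cluster_centers_[:, 1]  # R minus G per center
    apple_cluster = int(np.argmax(redness))
    return (km.labels_ == apple_cluster).reshape(image_rgb.shape[:2])
```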

9 pages, 961 KiB  
Article
Identification Process of Selected Graphic Features Apple Tree Pests by Neural Models Type MLP, RBF and DNN
by Piotr Boniecki, Maciej Zaborowicz, Agnieszka Pilarska and Hanna Piekarska-Boniecka
Agriculture 2020, 10(6), 218; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture10060218 - 10 Jun 2020
Cited by 9 | Viewed by 2523
Abstract
In this paper, the classification capabilities of perceptron and radial neural networks are compared using, as an example, the identification of selected pests feeding in apple orchards in Poland. The goal of the study was the neural separation of five selected apple orchard pests. The classification was based on graphical information coded as selected characteristic features of the pests presented in digital images. MLP (multilayer perceptron), RBF (radial basis function), and DNN (deep neural network) classification models are compared, generated using learning files acquired from the information contained in digital photographs of the five selected pests. To classify the pests, neural modeling methods were used, including digital image analysis techniques. Qualitative analysis of the neural models enabled the selection of the optimal network topology, characterized by the highest classification capability. Five selected shape coefficients and two defined graphical features of the classified objects were chosen as the representative graphic features. The created neural model is intended as the core of computer systems supporting the decision processes occurring during apple production, particularly in the context of automating pest protection in apple orchards.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
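
An illustrative feature pipeline in the same spirit: derive a few shape coefficients from a binarized pest silhouette with OpenCV and feed them to a perceptron classifier. The specific coefficients and network topologies of the paper are not reproduced.

```python
# Shape descriptors from a binary silhouette, classified with an MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def shape_coefficients(mask: np.ndarray) -> np.ndarray:
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)          # largest blob = the pest
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    circularity = 4 * np.pi * area / (perim ** 2 + 1e-9)
    extent = area / (w * h)
    return np.array([area, perim, circularity, w / h, extent])

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
# mlp.fit(np.stack([shape_coefficients(m) for m in masks]), pest_species_labels)
```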

14 pages, 7515 KiB  
Article
Robust Cherry Tomatoes Detection Algorithm in Greenhouse Scene Based on SSD
by Ting Yuan, Lin Lv, Fan Zhang, Jun Fu, Jin Gao, Junxiong Zhang, Wei Li, Chunlong Zhang and Wenqiang Zhang
Agriculture 2020, 10(5), 160; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture10050160 - 09 May 2020
Cited by 31 | Viewed by 3859
Abstract
The detection of cherry tomatoes in greenhouse scenes is of great significance for robotic harvesting. This paper presents a deep-learning-based method for cherry tomato detection that reduces the influence of illumination, growth differences, and occlusion. In view of the greenhouse operating environment and the accuracy of deep learning, the Single Shot multi-box Detector (SSD) was selected because of its excellent anti-interference ability and its capacity to learn from datasets. The first step was to build datasets covering various conditions in the greenhouse. According to the characteristics of cherry tomatoes, image samples with illumination changes, image rotation, and noise enhancement were used to expand the datasets. The training datasets were then used to train and construct the network model. To study the effect of the base network and the network input size, one contrast experiment was designed on different base networks (VGG16, MobileNet, and Inception V2), and another was conducted by changing the network input image size from 300 × 300 pixels to 512 × 512 pixels. Analysis of the experimental results shows that Inception V2 is the best base network, with an average precision of 98.85% in the greenhouse environment. Compared with other detection methods, this method shows a substantial improvement in cherry tomato detection.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
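
The dataset-expansion step (illumination change, rotation, noise) can be sketched with torchvision transforms as below; parameter values are illustrative, not the paper's settings.

```python
# Augmentation pipeline: illumination jitter, rotation, and additive noise.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3),   # illumination changes
    transforms.RandomRotation(degrees=15),                   # image rotation
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0, 1)),  # noise
])
```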

11 pages, 1170 KiB  
Article
Quality Evaluation of Potato Tubers Using Neural Image Analysis Method
by Andrzej Przybylak, Radosław Kozłowski, Ewa Osuch, Andrzej Osuch, Piotr Rybacki and Przemysław Przygodziński
Agriculture 2020, 10(4), 112; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture10040112 - 04 Apr 2020
Cited by 12 | Viewed by 2985
Abstract
This paper describes research aimed at developing an effective quality assessment method for potato tubers using neural image analysis techniques. The methods currently used to identify damage and diseases are time-consuming, require specialized knowledge, and often rely on subjective judgment. This study demonstrates the use of the developed neural model as a tool supporting the evaluation of potato tubers during the sorting process in the storage room.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

Review

24 pages, 1396 KiB  
Review
Automatic Detection and Monitoring of Insect Pests—A Review
by Matheus Cardim Ferreira Lima, Maria Elisa Damascena de Almeida Leandro, Constantino Valero, Luis Carlos Pereira Coronel and Clara Oliva Gonçalves Bazzo
Agriculture 2020, 10(5), 161; https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture10050161 - 09 May 2020
Cited by 128 | Viewed by 21120
Abstract
Many species of insect pests can be detected and monitored automatically. Several systems have been designed to improve integrated pest management (IPM) in the context of precision agriculture, and automatic detection traps have been developed for many important pests. These techniques and new technologies are very promising for the early detection and monitoring of aggressive and quarantine pests. The aim of the present paper is to review the techniques and the scientific state of the art in the use of sensors for the automatic detection and monitoring of insect pests. The paper focuses on methods for pest identification based on infrared sensors, audio sensors, and image-based classification, presenting the different systems available, examples of applications, and recent developments, including machine learning and the Internet of Things. Future trends in automatic traps and decision support systems are also discussed.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
