Research Status, Progress, and Applications of Agricultural Robot and Agriculture 4.0 Technologies in Field Operation—Volume II

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 8232

Special Issue Editors


Guest Editor
Department of Mechanical Engineering, Shinshu University, 3 Chome-1-1 Asahi, Matsumoto, Japan
Interests: agricultural robots; agricultural mechanics; machine vision

Guest Editor
Faculty of Agricultural Engineering and Technology, University of Agriculture, Faisalabad 38000, Pakistan
Interests: smart agriculture; agricultural robots; machine vision

Guest Editor
College of Mechanical and Electronic Engineering, Northwest A & F University, Yangling 712100, China
Interests: smart agriculture; fruit robotic harvesting; 2D/3D image processing; multispectral/hyperspectral imaging; spectroscopy; machine learning; deep learning

Special Issue Information

Dear Colleagues,

With the rapid increase in the world population, agriculture first moved from manual operation to mechanization and is now moving toward automated/robotic field operations to meet the growing demand for food. However, in contrast to industrial operations, agricultural field operations are complex, unstructured, and ill-defined, and are subject to a high degree of variation in illumination, atmospheric, and landscape conditions in addition to the dynamic biological nature of both field and specialty crops. These challenges make it extremely difficult to implement automated/robotic solutions in agricultural field operations. However, with technological advancements in GPS (global positioning systems), smart sensors, UAVs (unmanned aerial vehicles), GIS (geographic information systems), the IoT (Internet of Things), machine vision, artificial intelligence, blockchain, big data, cybernetics, nanotechnology, digital agriculture, precision agriculture, smart decision support systems, advanced control systems, etc., the use of automated/robotic operations in agriculture is becoming a reality.

The aim of this issue is to collect outstanding articles focusing on (but not limited to) robot solutions for various field operations (e.g., planting, irrigation, path planning and navigation, fertilization, spraying, canopy management, pollination, thinning, pruning, weed removal, precision crop load management, harvesting, postharvest transportation, and storage) for both field and specialty crops, precision agriculture applications, advanced in-field sensing and decision support systems, machine vision, artificial intelligence, deep learning, machine learning, big data, IoT, cybernetics, nanotechnology, digital agriculture, UAVs (unmanned aerial vehicles), mechatronics, smart sensors, swarm robotics, and nanorobotics applications in agriculture. 

Dr. Chao Chen
Dr. Satoru Sakai
Dr. Yaqoob Majeed
Prof. Dr. Longsheng Fu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics for field and specialty crops
  • agricultural automation
  • machine vision
  • deep learning
  • machine learning
  • advanced in-field sensing and decision support systems
  • swarm robotics
  • nanorobotics
  • smart sensors
  • precision agriculture
  • digital agriculture
  • agriculture 4.0
  • instrumentation
  • big data
  • cybernetics
  • SLAM (simultaneous localization and mapping)
  • ICT applications
  • IoT in agriculture

Published Papers (6 papers)


Research

22 pages, 9068 KiB  
Article
YOLO-BLBE: A Novel Model for Identifying Blueberry Fruits with Different Maturities Using the I-MSRCR Method
by Chenglin Wang, Qiyu Han, Jianian Li, Chunjiang Li and Xiangjun Zou
Agronomy 2024, 14(4), 658; https://doi.org/10.3390/agronomy14040658 - 24 Mar 2024
Viewed by 600
Abstract
Blueberry is among the fruits with high economic gains for orchard farmers. Identification of blueberry fruits with different maturities has economic significance to help orchard farmers plan pesticide application, estimate yield, and conduct harvest operations efficiently. Vision systems for automated orchard yield estimation have received growing attention toward fruit identification with different maturity stages. However, due to interfering factors such as varying outdoor illuminations, similar colors with the surrounding canopy, imaging distance, and occlusion in natural environments, it remains a serious challenge to develop reliable visual methods for identifying blueberry fruits with different maturities. This study constructed a YOLO-BLBE (Blueberry) model combined with an innovative I-MSRCR (Improved MSRCR (Multi-Scale Retinex with Color Restoration)) method to accurately identify blueberry fruits with different maturities. The color feature of blueberry fruit in the original image was enhanced by the I-MSRCR algorithm, which was improved based on the traditional MSRCR algorithm by adjusting the proportion of color restoration factors. The GhostNet model embedded by the CA (coordinate attention) mechanism module replaced the original backbone network of the YOLOv5s model to form the backbone of the YOLO-BLBE model. The BIFPN (Bidirectional Feature Pyramid Network) structure was applied in the neck network of the YOLO-BLBE model, and Alpha-EIOU was used as the loss function of the model to determine and filter candidate boxes. The main contributions of this study are as follows: (1) The I-MSRCR algorithm proposed in this paper can effectively amplify the color differences between blueberry fruits of different maturities. (2) Adding the synthesized blueberry images processed by the I-MSRCR algorithm to the training set for training can improve the model’s recognition accuracy for blueberries of different maturity levels. 
(3) The YOLO-BLBE model achieved an average identification accuracy of 99.58% for mature blueberry fruits, 96.77% for semi-mature blueberry fruits, and 98.07% for immature blueberry fruits. (4) The YOLO-BLBE model had a size of 12.75 MB and an average detection time of 0.009 s per image.

20 pages, 8671 KiB  
Article
A Lightweight, Secure Authentication Model for the Smart Agricultural Internet of Things
by Fei Pan, Boda Zhang, Xiaoyu Zhao, Luyu Shuai, Peng Chen and Xuliang Duan
Agronomy 2023, 13(9), 2257; https://doi.org/10.3390/agronomy13092257 - 28 Aug 2023
Cited by 1 | Viewed by 1032
Abstract
The advancement of smart agriculture, with information technology serving as a pivotal enabling factor, plays a crucial role in achieving food security, optimizing production efficiency, and preserving the environment. Simultaneously, wireless communication technology holds a critical function within the context of applying the Internet of Things in agriculture. In this research endeavor, we present an algorithm for lightweight channel authentication based on frequency-domain feature extraction. This algorithm aims to distinguish between authentic transmitters and unauthorized ones in the wireless communication context of a representative agricultural setting. To accomplish this, we compiled a dataset comprising legitimate and illegitimate communication channels observed in both indoor and outdoor scenarios, which are typical in the context of smart agriculture. Leveraging its exceptional perceptual capabilities and advantages in parallel computing, the Transformer has injected fresh vitality into the realm of signal processing. Consequently, we opted for the lightweight MobileViT as our foundational model and designed a frequency-domain feature extraction module to augment MobileViT’s capabilities in signal processing. During the validation phase, we conducted a side-by-side comparison with currently outstanding ViT models in terms of convergence speed, precision, and performance parameters. Our model emerged as the frontrunner across all aspects, with FDFE-MobileViT achieving precision, recall, and F-score rates of 96.6%, 95.6%, and 96.1%, respectively. Additionally, the model maintains a compact size of 4.04 MB. Through comprehensive experiments, our proposed method was rigorously verified as a lighter, more efficient, and more accurate solution.
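The abstract does not specify the internals of the frequency-domain feature extraction module. As an illustrative sketch of the general idea only (the function name, pooling scheme, and bin count are all assumptions), a raw 1-D channel sample can be mapped to a fixed-length log-magnitude spectrum feature before being fed to a backbone network:

```python
import numpy as np

def frequency_domain_features(signal, n_bins=64):
    """Illustrative frequency-domain feature vector for a 1-D channel sample:
    log-magnitude spectrum averaged into a fixed number of bands.

    This is a hedged sketch of the general technique, not the FDFE module.
    """
    # Real FFT magnitude of the sampled channel response
    spectrum = np.abs(np.fft.rfft(signal))
    # Log compression keeps dynamic range manageable
    log_mag = np.log1p(spectrum)
    # Pool the spectrum into n_bins averaged bands for a fixed-length feature
    bands = np.array_split(log_mag, n_bins)
    return np.array([b.mean() for b in bands])
```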

19 pages, 5373 KiB  
Article
Blueberry Ripeness Detection Model Based on Enhanced Detail Feature and Content-Aware Reassembly
by Wenji Yang, Xinxin Ma and Hang An
Agronomy 2023, 13(6), 1613; https://doi.org/10.3390/agronomy13061613 - 15 Jun 2023
Cited by 1 | Viewed by 1360
Abstract
Blueberries have high nutritional and economic value and are easy to cultivate, so they are a common fruit crop in China. Demand for blueberries is high in both domestic and foreign markets, and various technologies have been used to extend the blueberry supply cycle to about 7 months. However, blueberries grow in clusters, and a cluster generally contains fruits at different degrees of maturity, which makes manual picking of mature fruits inefficient and wastes considerable manpower and material resources. Automated harvesting is therefore needed to improve picking efficiency, and an accurate maturity detection model is a prerequisite for automated harvesting technology. This paper therefore proposes a blueberry ripeness detection model based on enhanced detail features and content-aware reassembly. First, this paper designs an EDFM (Enhanced Detail Feature Module) that improves detail feature extraction so that the model focuses on important features such as blueberry color and texture. Second, adding the RFB (Receptive Field Block) module to the model enlarges its receptive field while reducing its computation. Then, the MP (MaxPool) module is redesigned using a space-to-depth operation to obtain a new MP-S (MaxPool-Space-to-depth) module, which can effectively learn more feature information. Finally, an efficient upsampling method, the CARAFE (Content-Aware Reassembly of Features) module, is used, which aggregates contextual information within a larger receptive field to improve the detection performance of the model. 
In order to verify the effectiveness of the method proposed in this paper, experiments were carried out on the self-made dataset “Blueberry—Five Datasets” which consists of data on five different maturity levels of blueberry with a total of 10,000 images. Experimental results show that the mAP (mean average precision) of the proposed network reaches 80.7%, which is 3.2% higher than that of the original network, and has better performance than other existing target detection network models. The proposed model can meet the needs of automatic blueberry picking.
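The space-to-depth operation used to build the MP-S module rearranges each b×b spatial block into the channel dimension, so spatial downsampling discards no pixel information. A minimal NumPy sketch, with channel-last layout assumed for illustration:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange spatial blocks into channels: (H, W, C) -> (H/b, W/b, C*b*b).

    Every pixel value is preserved; only the layout changes.
    """
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0, "H and W must be divisible by block"
    # Split each spatial axis into (outer, within-block) indices
    x = x.reshape(h // block, block, w // block, block, c)
    # Group the two within-block axes next to the channel axis
    x = x.transpose(0, 2, 1, 3, 4)
    # Flatten (block, block, C) into the new channel dimension
    return x.reshape(h // block, w // block, c * block * block)
```

Unlike max pooling, which keeps only one value per window, this keeps all four values of a 2×2 window as extra channels, which is why the MP-S module can retain more feature information.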

17 pages, 6608 KiB  
Article
A Refined Apple Binocular Positioning Method with Segmentation-Based Deep Learning for Robotic Picking
by Huijun Zhang, Chunhong Tang, Xiaoming Sun and Longsheng Fu
Agronomy 2023, 13(6), 1469; https://doi.org/10.3390/agronomy13061469 - 25 May 2023
Cited by 5 | Viewed by 1260
Abstract
Apple-picking robots are now the most widely accepted substitute for low-efficiency, high-cost, labor-intensive manual apple harvesting. Although most current apple-picking robots work well in the laboratory, they are largely unworkable in orchard environments due to unsatisfactory apple positioning performance. In general, an accurate, fast, and widely applicable apple positioning method for apple-picking robots remains lacking. Some positioning methods based on detection networks have reached acceptable performance in some orchards; however, these methods ignore apples occluded by other apples, leaves, and branches. Therefore, an apple binocular positioning method based on a Mask Region Convolutional Neural Network (Mask R-CNN, an instance segmentation network) was developed to achieve better apple positioning. A binocular camera (Bumblebee XB3) was adopted to capture binocular images of apples. A Mask R-CNN was then applied to perform instance segmentation of the binocular images, and template matching with a parallel polar line constraint was applied for stereo matching of the apples. Finally, four feature point pairs of apples from the binocular images were selected to calculate disparity and depth. The trained Mask R-CNN reached a detection and segmentation intersection over union (IoU) of 80.11% and 84.39%, respectively. The coefficient of variation (CoV) and positioning accuracy (PA) of binocular positioning were 5.28 mm and 99.49%, respectively. This research developed a new method to fulfill binocular positioning with a segmentation-based neural network.
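The final disparity-to-depth step in a pipeline like this follows the standard rectified pinhole stereo model, depth Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the horizontal disparity of matched points. A sketch of that calculation (symbol names assumed, not taken from the paper):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Rectified pinhole stereo: depth Z = f * B / d.

    x_left, x_right: horizontal pixel coordinates of a matched feature point
    focal_px: focal length in pixels; baseline_m: camera baseline in meters.
    """
    d = x_left - x_right  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / d
```

Averaging depth over several matched feature point pairs per fruit, as the abstract describes with four pairs, reduces the effect of a single bad match.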

14 pages, 2624 KiB  
Article
Multi-Scale and Multi-Match for Few-Shot Plant Disease Image Semantic Segmentation
by Wenji Yang, Wenchao Hu, Liping Xie and Zhenji Yang
Agronomy 2022, 12(11), 2847; https://doi.org/10.3390/agronomy12112847 - 15 Nov 2022
Cited by 1 | Viewed by 1406
Abstract
Deep convolutional neural networks have achieved great success in semantic segmentation tasks, but existing methods require a large number of annotated images for training and do not scale well to new objects. Therefore, few-shot semantic segmentation methods that can identify new objects from only one or a few annotated images are gradually gaining attention. However, current few-shot segmentation methods cannot segment plant diseases well. Given this situation, a few-shot plant disease semantic segmentation model with multi-scale and multi-prototype matching (MPM) is proposed. This method generates multiple prototypes and multiple query feature maps and then establishes the relationships between them. Specifically, the support feature and query feature are first extracted from the high-scale layers of the feature extraction network; masked average pooling is then applied to the support feature to generate prototypes for a similarity match with the query feature. At the same time, low-scale and high-scale features are fused to generate another support feature and query feature that mix in detailed features, and a new prototype is generated through masked average pooling to establish a relationship with the query feature at this scale. Finally, to address the lack of spatial-distance awareness in traditional cosine similarity, a CES (cosine-Euclidean similarity) module is designed to establish the relationship between prototypes and query feature maps. To verify the superiority of the method, experiments were conducted on our constructed PDID-5i dataset; the mIoU is 40.5%, which is 1.7% higher than that of the original network.
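Masked average pooling and the prototype-query match described above can be sketched as follows. The abstract does not give the exact CES formula, so the Euclidean penalty term and its weight below are assumptions meant only to illustrate the idea of combining cosine similarity with spatial-distance awareness:

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Prototype = mean of the feature vectors inside the support mask.

    feat: (H, W, C) feature map; mask: (H, W) binary support mask.
    """
    m = mask.astype(np.float64)[..., None]
    return (feat * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def cosine_euclidean_similarity(proto, query, lam=0.1):
    """Hedged sketch of a CES-style score: cosine similarity penalized by
    Euclidean distance, so equally oriented but distant features score lower.

    The blending weight `lam` is an assumption, not a value from the paper.
    """
    cos = proto @ query / (np.linalg.norm(proto) * np.linalg.norm(query) + 1e-8)
    return cos - lam * np.linalg.norm(proto - query)
```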

13 pages, 3146 KiB  
Article
Lightweight Blueberry Fruit Recognition Based on Multi-Scale and Attention Fusion NCBAM
by Wenji Yang, Xinxin Ma, Wenchao Hu and Pengjie Tang
Agronomy 2022, 12(10), 2354; https://doi.org/10.3390/agronomy12102354 - 29 Sep 2022
Cited by 6 | Viewed by 1778
Abstract
Blueberries are widely planted because of their rich nutritional value. Dense adhesion and serious occlusion of blueberries during growth have seriously hindered the development of automatic blueberry picking. Therefore, using deep learning to rapidly and accurately locate blueberries under dense adhesion and serious occlusion is one of the key technologies for automatic blueberry picking. To improve positioning accuracy, this paper designs a blueberry recognition model based on an improved YOLOv5. First, a blueberry dataset is constructed. On this basis, a new attention module, NCBAM, is designed to improve the backbone network's ability to extract blueberry features. Second, a small-target detection layer is added to improve multi-scale recognition of blueberries. Finally, the C3Ghost module is introduced into the backbone network, which reduces the number of model parameters while maintaining accuracy, thereby reducing model complexity to a certain extent. To verify the effectiveness of the model, experiments were conducted on the self-made blueberry dataset; the mAP is 83.2%, which is 2.4% higher than that of the original network, showing that the proposed method improves the model's blueberry recognition accuracy.
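NCBAM's internals are not detailed in the abstract. As a hedged illustration of the CBAM-style channel attention such modules build on (the learned shared MLP is omitted here for brevity, so this is a simplified sketch, not NCBAM itself):

```python
import numpy as np

def channel_attention(x):
    """Simplified CBAM-style channel attention on an (H, W, C) feature map:
    squeeze spatial dims with global average- and max-pooling, gate channels
    with a sigmoid, and reweight the input. The learned MLP is omitted.
    """
    avg = x.mean(axis=(0, 1))            # (C,) global average pool
    mx = x.max(axis=(0, 1))              # (C,) global max pool
    w = 1.0 / (1.0 + np.exp(-(avg + mx)))  # sigmoid gate per channel
    return x * w                         # broadcast reweighting over H, W
```

In the full CBAM design a spatial attention map is applied after this channel reweighting; the abstract suggests NCBAM modifies this recipe to better suit blueberry features.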
