Review

Machine Learning in Agriculture: A Comprehensive Updated Review

1 Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
2 Department of Agriculture, Forestry and Food Science (DISAFA), University of Turin, Largo Braccini 2, 10095 Grugliasco, Italy
3 FarmB Digital Agriculture P.C., Doiranis 17, GR 54639 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Submission received: 6 April 2021 / Revised: 21 May 2021 / Accepted: 24 May 2021 / Published: 28 May 2021
(This article belongs to the Collection Sensors and Robotics for Digital Agriculture)

Abstract:
The digital transformation of agriculture has evolved various aspects of management into artificial intelligence systems for the sake of extracting value from the ever-increasing data originating from numerous sources. A subset of artificial intelligence, namely machine learning, has considerable potential to handle numerous challenges in the establishment of knowledge-based farming systems. The present study aims at shedding light on machine learning in agriculture by thoroughly reviewing the recent scholarly literature based on keyword combinations of “machine learning” along with “crop management”, “water management”, “soil management”, and “livestock management”, in accordance with PRISMA guidelines. Only journal papers published within 2018–2020 were considered eligible. The results indicated that this topic pertains to different disciplines that favour convergence research at the international level. Furthermore, crop management was observed to be at the centre of attention. A plethora of machine learning algorithms were used, with those belonging to Artificial Neural Networks being the most efficient. In addition, maize and wheat as well as cattle and sheep were the most investigated crops and animals, respectively. Finally, a variety of sensors, attached to satellites and unmanned ground and aerial vehicles, have been utilized as a means of obtaining reliable input data for the data analyses. It is anticipated that this study will constitute a beneficial guide to all stakeholders towards enhancing awareness of the potential advantages of using machine learning in agriculture and contributing to more systematic research on this topic.

1. Introduction

1.1. General Context of Machine Learning in Agriculture

Modern agriculture has to cope with several challenges, including the increasing demand for food as a consequence of the explosive growth of the global population, climate change [1], natural resource depletion [2], altered dietary choices [3], as well as safety and health concerns [4]. As a means of addressing the above issues, which place pressure on the agricultural sector, there is an urgent need to optimize the effectiveness of agricultural practices while, simultaneously, lessening the environmental burden. In particular, these two essentials have driven the transformation of agriculture into precision agriculture. This modernization of farming has great potential to assure sustainability, maximal productivity, and a safe environment [5]. In general, smart farming is based on four key pillars in order to deal with the increasing needs: (a) optimal natural resource management, (b) conservation of the ecosystem, (c) development of adequate services, and (d) utilization of modern technologies [6]. An essential prerequisite of modern agriculture is, definitely, the adoption of Information and Communication Technology (ICT), which is promoted by policy-makers around the world. ICT can indicatively include farm management information systems, humidity and soil sensors, accelerometers, wireless sensor networks, cameras, drones, low-cost satellites, online services, and automated guided vehicles [7].
The large volume of data produced by digital technologies, usually referred to as “big data”, requires large storage capabilities in addition to editing, analyzing, and interpreting. The latter has considerable potential to add value for society, the environment, and decision-makers [8]. Nevertheless, big data pose challenges on account of their so-called “5-V” requirements: (a) Volume, (b) Variety, (c) Velocity, (d) Veracity, and (e) Value [9]. Conventional data processing techniques are incapable of meeting the constantly growing demands of the new era of smart farming, which is an important obstacle to extracting valuable information from field data [10]. To that end, Machine Learning (ML), a subset of artificial intelligence [11], has emerged by taking advantage of the exponential growth in computational power.
There is a plethora of applications of ML in agriculture. According to the recent literature survey by Liakos et al. [12], regarding the time period of 2004 to 2018, four generic categories were identified (Figure 1). These categories refer to crop, water, soil, and livestock management. In particular, as far as crop management is concerned, it represented the majority of the articles amongst all categories (61% of the total articles) and was further sub-divided into:
  • Yield prediction;
  • Disease detection;
  • Weed detection;
  • Crop recognition;
  • Crop quality.
The generic categories dealing with the management of water and soil were found to be less investigated, corresponding cumulatively to 20% of the total number of papers (10% for each category).
Finally, two main sub-categories were identified for the livestock-related applications corresponding to a total 19% of journal papers:
  • Livestock production;
  • Animal welfare.

1.2. Open Problems Associated with Machine Learning in Agriculture

Due to the broad range of applications of ML in agriculture, several reviews have been published in this research field. The majority of these review studies have been dedicated to crop disease detection [13,14,15,16], weed detection [17,18], yield prediction [19,20], crop recognition [21,22], water management [23,24], animal welfare [25,26], and livestock production [27,28]. Furthermore, other studies were concerned with the implementation of ML methods on the main grain crops by investigating different aspects including quality and disease detection [29]. Finally, focus has been placed on big data analysis using ML, aiming at identifying real-life problems originating from smart farming [30], or dealing with methods to analyze hyperspectral and multispectral data [31].
Although ML in agriculture has made considerable progress, several open problems remain, which have some common points of reference, despite the fact that the topic covers a variety of sub-fields. According to [23,24,28,32], the main problems are associated with the implementation of sensors on farms for numerous reasons, including high costs of ICT, traditional practices, and lack of information. In addition, the majority of the available datasets do not reflect realistic cases, since they are normally generated by a few people getting images or specimens in a short time period and from a limited area [15,21,22,23]. Consequently, more practical datasets coming from fields are required [18,20]. Moreover, the need for more efficient ML algorithms and scalable computational architectures has been pointed out, which can lead to rapid information processing [18,22,23,31]. The challenging background, when it comes to obtaining images, video, or audio recordings, has also been mentioned owing to changes in lighting [16,29], blind spots of cameras, environmental noise, and simultaneous vocalizations [25]. Another important open problem is that the vast majority of farmers are non-experts in ML and, thus, they cannot fully comprehend the underlying patterns obtained by ML algorithms. For this reason, more user-friendly systems should be developed. In particular, simple systems, being easy to understand and operate, would be valuable, as for example a visualization tool with a user-friendly interface for the correct presentation and manipulation of data [25,30,31]. Taking into account that farmers are getting more and more familiar with smartphones, specific smartphone applications have been proposed as a possible solution to address the above challenge [15,16,21]. 
Last but not least, the development of efficient ML techniques by incorporating expert knowledge from different stakeholders should be fostered, particularly regarding computing science, agriculture, and the private sector, as a means of designing realistic solutions [19,22,24,33]. As stated in [12], currently, all of the efforts pertain to individual solutions, which are not always connected with the process of decision-making, as seen for example in other domains.

1.3. Aim of the Present Study

As pointed out above, because of the multiple applications of ML in agriculture, several review studies have been published recently. However, these studies usually concentrate on only one sub-field of agricultural production. Motivated by the current tremendous progress in ML, the increasing interest worldwide, and its impact on various domains of agriculture, a systematic bibliographic survey is presented covering the range of categories proposed in [12], which were summarized in Figure 1. In particular, we focus on reviewing the relevant literature of the last three years (2018–2020) with the intention of providing an updated view of ML applications in agricultural systems. In fact, this work is an updated continuation of the work presented in [12], consequently following exactly the same framework and inclusion criteria. As a consequence, the scholarly literature was screened in order to cover a broad spectrum of important features for capturing the current progress and trends, including the identification of: (a) the research areas most interested in ML in agriculture along with the geographical distribution of the contributing organizations, (b) the most efficient ML models, (c) the most investigated crops and animals, and (d) the most implemented features and technologies.
As will be discussed next, overall, a 745% increase in the number of journal papers took place in the last three years as compared to [12], thus justifying the need for a new updated review on the specific topic. Moreover, crop management remained the most investigated topic, with a number of ML algorithms having been exploited as a means of tackling the heterogeneous data originating from agricultural fields. As compared to [12], more crop and animal species have been investigated by using an extensive range of input parameters coming mainly from remote sensing, such as satellites and drones. In addition, people from different research fields have dealt with ML in agriculture, hence contributing to the remarkable advancement in this field.

1.4. Outline of the Paper

The remainder of this paper is structured as follows. The second section briefly describes the fundamentals of ML along with the subject of the four generic categories for the sake of better comprehension of the scope of the present study. The implemented methodology, along with the inclusive criteria and the search engines, is analyzed in the third section. The main performance metrics, which were used in the selected articles, are also presented in this section. The main results are shown in the fourth section in the form of bar and pie charts, while in the fifth section, the main conclusions are drawn by also discussing the results from a broader perspective. Finally, all the selected journal papers are summarized in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9, in accordance with their field of application, and presented in the Appendix A, together with Table A10 and Table A11 that contain commonly used abbreviations, with the intention of not disrupting the flow of the main text.

2. Background

2.1. Fundamentals of Machine Learning: A Brief Overview

In general, the objective of ML algorithms is to optimize the performance of a task via exploiting examples or past experience. In particular, ML can generate efficient relationships regarding data inputs and reconstruct a knowledge scheme. In this data-driven methodology, the more data are used, the better ML works. This is similar to how a human being performs a particular task better by gaining more experience [34]. The central outcome of ML is a measure of generalizability: the degree to which the ML algorithm is able to provide correct predictions when presented with new data, on the basis of learned rules originating from preceding exposure to similar data [35]. More specifically, data involve a set of examples, which are described by a group of characteristics, usually called features. Broadly speaking, ML systems operate in two phases, namely learning (training) and testing. In order to facilitate the former phase, these features commonly form a feature vector that can be binary, numeric, ordinal, or nominal [36]. This vector is utilized as an input within the learning phase. In brief, by relying on training data, within the learning phase, the machine learns to perform the task from experience. Once the learning performance reaches a satisfactory point (expressed through mathematical and statistical relationships), it ends. Subsequently, the model that was developed through the training process can be used to classify, cluster, or predict.
An overview of a typical ML system is illustrated in Figure 2. In order to bring the complex raw data into a suitable state, a pre-processing step is required. This usually includes: (a) data cleaning for removing inconsistent or missing items and noise, (b) data integration, when many data sources exist, and (c) data transformation, such as normalization and discretization [37]. The feature extraction/selection unit aims at creating and/or identifying the most informative subset of features on which, subsequently, the learning model is implemented throughout the training phase [38]. Regarding the feedback loop, which is depicted in Figure 2, it serves for adjustments pertaining to the feature extraction/selection unit as well as the pre-processing one, further improving the overall learning model’s performance. During the testing phase, previously unseen samples, usually represented as feature vectors, are imported into the trained model. Finally, an appropriate decision is made by the model (for example, classification or regression) on the basis of the features present in each sample. Deep learning, a subfield of ML, utilizes an alternative architecture by shifting the process of converting raw data to features (feature engineering) into the learning system itself. Consequently, the feature extraction/selection unit is absent, resulting in a fully trainable system; it starts from a raw input and ends with the desired output [39,40].
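The pipeline just described can be sketched in a few lines of code. The following minimal, self-contained illustration (our own toy example with synthetic data, not drawn from the reviewed studies) normalizes features (pre-processing), keeps the highest-variance feature subset (feature selection), trains a simple nearest-centroid classifier (learning phase), and classifies a previously unseen feature vector (testing phase):

```python
# Minimal sketch of the generic ML pipeline: pre-processing (min-max
# normalization), feature selection (by variance), learning (class
# centroids), and testing (nearest-centroid prediction).
# All data are synthetic and purely illustrative.

def fit_minmax(X):
    """Learn per-column min/max from the training data."""
    cols = list(zip(*X))
    return [min(c) for c in cols], [max(c) for c in cols]

def apply_minmax(X, lo, hi):
    """Pre-processing: scale each feature into [0, 1]."""
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in X]

def select_features(X, k):
    """Feature selection: keep the k highest-variance columns."""
    cols = list(zip(*X))
    def var(c):
        m = sum(c) / len(c)
        return sum((v - m) ** 2 for v in c) / len(c)
    keep = sorted(sorted(range(len(cols)), key=lambda i: var(cols[i]),
                         reverse=True)[:k])
    return keep, [[row[i] for i in keep] for row in X]

def train_centroids(X, y):
    """Learning phase: one centroid (mean vector) per class."""
    return {label: [sum(c) / len(c) for c in
                    zip(*[x for x, t in zip(X, y) if t == label])]
            for label in set(y)}

def predict(centroids, x):
    """Testing phase: label of the nearest centroid."""
    return min(centroids, key=lambda lab: sum(
        (u - v) ** 2 for u, v in zip(centroids[lab], x)))

# Feature vectors: two informative columns plus one near-constant column.
X = [[1.0, 0.2, 5.0], [1.1, 0.3, 5.0], [3.0, 2.0, 5.1], [3.2, 2.1, 5.0]]
y = ["healthy", "healthy", "diseased", "diseased"]

lo, hi = fit_minmax(X)
keep, X_train = select_features(apply_minmax(X, lo, hi), k=2)
model = train_centroids(X_train, y)

# A previously unseen sample passes through the same pipeline stages.
sample = apply_minmax([[1.05, 0.25, 5.0]], lo, hi)[0]
label = predict(model, [sample[i] for i in keep])
```

Note that the unseen sample is transformed with the parameters learned during training, mirroring the feedback between the pre-processing and feature extraction/selection units in Figure 2.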
Based on the learning type, ML can be classified according to the relative literature [41,42] as:
  • Supervised learning: The input and output are known and the machine tries to find the optimal way to reach an output given an input;
  • Unsupervised learning: No labels are provided, leaving the learning algorithm itself to generate structure within its input;
  • Semi-supervised learning: Input data constitute a mixture of labeled and non-labeled data;
  • Reinforcement learning: Decisions are made towards finding actions that lead to the most positive outcome, determined solely by trial and error and delayed reward.
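The contrast between the first two paradigms can be made concrete with a toy example (entirely illustrative, not from the reviewed literature): given the same one-dimensional sensor readings, a supervised learner uses the known labels to fit a threshold rule, while an unsupervised learner (here a minimal k-means with a fixed initialization) recovers the same grouping from the structure of the inputs alone.

```python
# Supervised vs. unsupervised learning on the same one-dimensional
# sensor readings (illustrative values only).

data = [1.0, 1.2, 0.9, 4.0, 4.2, 3.9]
labels = ["low", "low", "low", "high", "high", "high"]  # used only when supervised

# Supervised: labels are known, so learn a simple threshold rule
# (the midpoint between the two class means).
mean_low = sum(x for x, t in zip(data, labels) if t == "low") / 3
mean_high = sum(x for x, t in zip(data, labels) if t == "high") / 3
threshold = (mean_low + mean_high) / 2

def classify(x):
    return "high" if x > threshold else "low"

# Unsupervised: no labels are provided; a minimal k-means (k = 2,
# fixed initialization at the data extremes) generates the grouping
# from structure in the input alone.
centers = [min(data), max(data)]
for _ in range(10):
    groups = [[], []]
    for x in data:
        groups[0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1].append(x)
    centers = [sum(g) / len(g) for g in groups]
```

Semi-supervised learning would mix the two: a few labeled points anchor the classes while the unlabeled remainder refines the boundary.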
Nowadays, ML is used in facilitating several management aspects in agriculture [12] and in a plethora of other applications, such as image recognition [43], speech recognition [44], autonomous driving [45], credit card fraud detection [46], stock market forecasting [47], fluid mechanics [48], email, spam and malware filtering [49], medical diagnosis [40], contamination detection in urban water networks [50], and activity recognition [51], to mention but a few.

2.2. Brief Description of the Four Generic Categories

2.2.1. Crop Management

The crop management category involves versatile aspects that originated from the combination of farming techniques in the direction of managing the biological, chemical and physical crop environment with the aim of reaching both quantitative and qualitative targets [52]. Using advanced approaches to manage crops, such as yield prediction, disease detection, weed detection, crop recognition, and crop quality, contributes to the increase of productivity and, consequently, the financial income. The above aspects constitute key goals of precision agriculture.

Yield Prediction

In general, yield prediction is one of the most important and challenging topics in modern agriculture. An accurate model can help, for instance, the farm owners to take informed management decisions on what to grow towards matching the crop to the existing market’s demands [20]. However, this is not a trivial task; it consists of various steps. Yield prediction can be determined by several factors such as environment, management practices, crop genotypic and phenotypic characteristics, and their interactions. Hence, it necessitates a fundamental comprehension of the relationship between these interactive factors and yield. In turn, identifying such kinds of relationships mandates comprehensive datasets along with powerful algorithms such as ML techniques [53].
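In its simplest form, such a relationship can be captured by a regression model. The following sketch (with entirely fabricated numbers and a hypothetical input factor, shown only to illustrate the idea; real yield models combine many interacting factors) fits a univariate least-squares line relating one management input to yield and predicts the yield for an unseen value:

```python
# Toy least-squares regression: relate one input factor (x) to yield (y).
# All numbers are fabricated for illustration.
xs = [50.0, 100.0, 150.0, 200.0]   # hypothetical input, e.g., fertilizer rate
ys = [2.0, 3.1, 3.9, 5.0]          # hypothetical yield, t/ha

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form simple linear regression: slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict_yield(x):
    """Predicted yield for an unseen input value."""
    return intercept + slope * x
```

ML techniques such as neural networks generalize this idea to many nonlinear, interacting inputs, which is why they require the comprehensive datasets mentioned above.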

Disease Detection

Crop diseases constitute a major threat in agricultural production systems that deteriorate yield quality and quantity at production, storage, and transportation level. At farm level, reports on yield losses, due to plant diseases, are very common [54]. Furthermore, crop diseases pose significant risks to food security at a global scale. Timely identification of plant diseases is a key aspect for efficient management. Plant diseases may be provoked by various kinds of bacteria, fungi, pests, viruses, and other agents. Disease symptoms, namely the physical evidence of the presence of pathogens and the changes in the plants’ phenotype, may consist of leaf and fruit spots, wilting and color change [55], curling of leaves, etc. Historically, disease detection was conducted by expert agronomists, by performing field scouting. However, this process is time-consuming and solely based on visual inspection. Recent technological advances have made commercially available sensing systems able to identify diseased plants before the symptoms become visible. Furthermore, in the past few years, computer vision, especially by employing deep learning, has made remarkable progress. As highlighted by Zhang et al. [56], who focused on identifying cucumber leaf diseases by utilizing deep learning, due to the complex environmental background, it is beneficial to eliminate background before model training. Moreover, accurate image classifiers for disease diagnosis need a large dataset of both healthy and diseased plant images. In reference to large-scale cultivations, such kinds of automated processes can be combined with autonomous vehicles, to timely identify phytopathological problems by implementing regular inspections. Furthermore, maps of the spatial distribution of the plant disease can be created, depicting the zones in the farm where the infection has been spread [57].
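The background-elimination step noted above can, in its crudest form, amount to masking non-plant pixels before training. The sketch below is our own simplified illustration (not the method of [56]): it keeps only pixels whose green channel dominates the red and blue channels, zeroing everything else; a real system would use a learned or calibrated segmentation instead.

```python
# Illustrative pre-training background removal for leaf images:
# keep pixels where green dominates red and blue, zero out the rest.
# This green-dominance rule is a simplification for illustration only.

def mask_background(image):
    """image: rows of (r, g, b) tuples with values in 0..255."""
    return [[(r, g, b) if g > r and g > b else (0, 0, 0)
             for (r, g, b) in row] for row in image]

# A 2x2 toy image: one leaf-green pixel, three soil/sky-like pixels.
img = [[(40, 180, 60), (120, 110, 100)],
       [(200, 190, 180), (90, 85, 200)]]
clean = mask_background(img)
```

After masking, only the leaf-like pixel survives, so the classifier trains on foreground content rather than on the complex environmental background.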

Weed Detection

As a result of their prolific seed production and longevity, weeds usually grow and spread invasively over large parts of the field very fast, competing with crops for the resources, including space, sunlight, nutrients, and water availability. Besides, weeds frequently arise sooner than crops without having to face natural enemies, a fact that adversely affects crop growth [18]. In order to prevent crop yield reduction, weed control is an important management task by either mechanical treatment or application of herbicides. Mechanical treatment is, in most cases, difficult to be performed and ineffective if not properly performed, making herbicide application the most widely used operation. Using large quantities of herbicides, however, turns out to be both costly and detrimental for the environment, especially in the case of uniform application without taking into account the spatial distribution of the weeds. Remarkably, long-term herbicide use is very likely to make weeds more resistant, thus, resulting in more demanding and expensive weed control. In recent years, considerable achievements have been made pertaining to the differentiation of weeds from crops on the basis of smart agriculture. This discrimination can be accomplished by using remote or proximal sensing with sensors attached on satellites, aerial, and ground vehicles, as well as unmanned vehicles (both ground (UGV) and aerial (UAV)). The transformation of data gathered by UAVs into meaningful information is, however, still a challenging task, since both data collection and classification need painstaking effort [58]. ML algorithms coupled with imaging technologies or non-imaging spectroscopy can allow for real-time differentiation and localization of target weeds, enabling precise application of herbicides to specific zones, instead of spraying the entire fields [59] and planning of the shortest weeding path [60].

Crop Recognition

Automatic recognition of crops has gained considerable attention in several scientific fields, such as plant taxonomy, botanical gardens, and new species discovery. Plant species can be recognized and classified via analysis of various organs, including leaves, stems, fruits, flowers, roots, and seeds [61,62]. Using leaf-based plant recognition seems to be the most common approach by examining specific leaf’s characteristics like color, shape, and texture [63]. With the broader use of satellites and aerial vehicles as means of sensing crop properties, crop classification through remote sensing has become particularly popular. As in the above sub-categories, the advancement on computer software and image processing devices combined with ML has led to the automatic recognition and classification of crops.

Crop Quality

Crop quality is very consequential for the market and, in general, is related to soil and climate conditions, cultivation practices and crop characteristics, to name a few. High quality agricultural products are typically sold at better prices, hence, offering larger earnings to farmers. For instance, as regards fruit quality, flesh firmness, soluble solids content, and skin color are among the most ordinary maturity indices utilized for harvesting [64]. The timing of harvesting greatly affects the quality characteristics of the harvested products in both high value crops (tree crops, grapes, vegetables, herbs, etc.) and arable crops. Therefore, developing decision support systems can aid farmers in taking appropriate management decisions for increased quality of production. For example, selective harvesting is a management practice that may considerably increase quality. Furthermore, crop quality is closely linked with food waste, an additional challenge that modern agriculture has to cope with, since if the crop deviates from the desired shape, color, or size, it may be thrown away. Similarly to the above sub-section, ML algorithms combined with imaging technologies can provide encouraging results.

2.2.2. Water Management

The agricultural sector constitutes the main consumer of available fresh water on a global scale, as plant growth largely relies on water availability. Taking into account the rapid depletion rate of many aquifers with negligible recharge, more effective water management is needed for the purpose of better conserving water and accomplishing sustainable crop production [65]. Effective water management can also lead to the improvement of water quality as well as reduction of pollution and health risks [66]. Recent research on precision agriculture offers the potential of variable rate irrigation so as to attain water savings. This can be realized by implementing irrigation at rates that vary according to field variability, on the basis of the specific water requirements of separate management zones, instead of using a uniform rate in the entire field. The effectiveness and feasibility of the variable rate irrigation approach depend on agronomic factors, including topography, soil properties, and their effect on soil water, in order to accomplish both water savings and yield optimization [67]. Carefully monitoring the status of soil water, crop growth conditions, and temporal and spatial patterns, in combination with weather condition monitoring and forecasting, can help in irrigation programming and efficient water management. Among the utilized ICTs, remote sensing can provide images with the spatial and temporal variability associated with soil moisture status and crop growth parameters for precision water management. Notably, water management is particularly challenging in arid areas, where groundwater sources are used for irrigation, with precipitation providing only part of the total crop evapotranspiration (ET) demands [68].

2.2.3. Soil Management

Soil, a heterogeneous natural resource, involves mechanisms and processes that are very complex. Precise information regarding soil on a regional scale is vital, as it contributes towards better soil management consistent with land potential and, in general, sustainable agriculture [5]. Better management of soil is also of great interest owing to issues like land degradation (loss of biological productivity), soil-nutrient imbalance (due to fertilizer overuse), and soil erosion (as a result of vegetation overcutting, improper crop rotations, livestock overgrazing, and unsustainable fallow periods) [69]. Useful soil properties include texture, organic matter, and nutrient content, to mention but a few. Traditional soil assessment methods involve soil sampling and laboratory analysis, which are normally expensive and take considerable time and effort. However, remote sensing and soil mapping sensors can provide a low-cost and effortless solution for the study of soil spatial variability. Data fusion and handling of such heterogeneous “big data” may pose important challenges when traditional data analysis methods are used. ML techniques can serve as a trustworthy, low-cost solution for such a task.

2.2.4. Livestock Management

It is widely accepted that livestock production systems have been intensified in the context of productivity per animal. This intensification involves social concerns that can influence consumer perception of food safety, security, and sustainability, based on animal welfare and human health. In particular, monitoring both the welfare of animals and overall production is a key aspect so as to improve production systems [70]. The above fields take place in the framework of precision livestock farming, aiming at applying engineering techniques to monitor animal health in real time and recognizing warning messages, as well as improving the production at the initial stages. The role of precision livestock farming is getting more and more significant by supporting the decision-making processes of livestock owners and changing their role. It can also facilitate the products’ traceability, in addition to monitoring their quality and the living conditions of animals, as required by policy-makers [71]. Precision livestock farming relies on non-invasive sensors, such as cameras, accelerometers, gyroscopes, radio-frequency identification systems, pedometers, and optical and temperature sensors [25]. IoT sensors leverage variable physical quantities (VPQs) as a means of sensing temperature, sound, humidity, etc. For instance, IoT sensors can warn if a VPQ falls out of regular limits in real-time, giving valuable information regarding individual animals. As a result, the cost of repetitively and arduously checking each animal can be reduced [72]. In order to take advantage of the large amounts of data, ML methodologies have become an integral part of modern livestock farming. Models can be developed that have the capability of defining the manner a biological system operates, relying on causal relationships and exploiting this biological awareness towards generating predictions and suggestions.

Animal Welfare

There is an ongoing concern for animal welfare, since the health of animals is strongly associated with product quality and, as a consequence, predominantly with the health of consumers and, secondarily, with the improvement of economic efficiency [73]. There exist several indexes for animal welfare evaluation, including physiological stress and behavioral indicators. The most commonly used indicator is animal behavior, which can be affected by diseases, emotions, and living conditions, which have the potential to demonstrate physiological conditions [25]. Sensors, commonly used to detect behavioral changes (for example, changes in water or food consumption, reduced animal activity), include microphone systems, cameras, accelerometers, etc.

Livestock Production

The use of sensor technology, along with advanced ML techniques, can increase livestock production efficiency. Given the impact of practices of animal management on productive elements, livestock owners are getting cautious of their asset. However, as the livestock holdings get larger, the proper consideration of every single animal is very difficult. From this perspective, the support to farmers via precision livestock farming, mentioned above, is an auspicious step for aspects associated with economic efficiency and establishment of sustainable workplaces with reduced environmental footprint [74]. Generally, several models have been used in animal production, with their intentions normally revolving around growing and feeding animals in the best way. However, the large volumes of data being involved, again, call for ML approaches.

3. Methods

3.1. Screening of the Relative Literature

In order to identify the relevant studies concerning ML with respect to different aspects of management in agriculture, the search engines of Scopus, Google Scholar, ScienceDirect, PubMed, Web of Science, and MDPI were utilized. In addition, keyword combinations of “machine learning” in conjunction with each of the following were used: “crop management”, “water management”, “soil management”, and “livestock management”. Our intention was to filter the literature within the same framework as [12], however, focusing solely on the period 2018–2020. Once a relevant study was identified, its references were scanned to find studies that had not been found through the initial search procedure. This process was iterated until no further relevant studies occurred. At this stage, only journal papers were considered eligible. Thus, non-English studies, conference papers, chapters, reviews, as well as Master and Doctoral Theses were excluded. The latest search was conducted on 15 December 2020. Subsequently, the abstract of each paper was reviewed, while, at a next stage, the full text was read to decide its appropriateness. After a discussion between all co-authors with reference to the appropriateness of the selected papers, some were excluded in case they did not meet the two main inclusion criteria, namely: (a) the paper was published within 2018–2020 and (b) the paper referred to one of the categories and sub-categories summarized in Figure 1. Finally, the papers were classified into these sub-categories. Overall, 338 journal papers were identified. The flowchart of the present review methodology is depicted in Figure 3, based on the PRISMA guidelines [75], along with information about the stage at which each exclusion criterion was imposed, similarly to recent systematic review studies such as [72,76,77,78].

3.2. Definition of the Performance Metrics Commonly Used in the Reviewed Studies

In this subsection, the performance metrics most commonly used in the reviewed papers are briefly described. In general, these metrics provide a common measure for evaluating ML algorithms. The selection of appropriate metrics is very important, since: (a) the measured performance of an algorithm depends entirely on the chosen metrics and (b) the metric itself influences how the significance of various characteristics is weighted.
The confusion matrix constitutes one of the most intuitive tools for assessing the correctness of a model. It is used for classification problems, where the output can belong to two or more classes. Let us consider a simple example, by giving a label to a target variable: for example, “1” when a plant has been infected with a disease and “0” otherwise. In this simplified case, the confusion matrix (Figure 4) is a 2 × 2 table with two dimensions, namely “Actual” and “Predicted”, whose cells record the outcome of comparing the predictions with the actual class labels. In the above simplified example, this outcome can take the following values:
  • True Positive (TP): The plant has a disease (1) and the model classifies this case as diseased (1);
  • True Negative (TN): The plant does not have a disease (0) and the model classifies this case as a healthy plant (0);
  • False Positive (FP): The plant does not have a disease (0), but the model classifies this case as diseased (1);
  • False Negative (FN): The plant has a disease (1), but the model classifies this case as a healthy plant (0).
As shown in Table 1, these values can be combined to estimate the performance metrics typically reported for classification problems [79].
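As a concrete illustration, the following minimal Python sketch (with purely hypothetical labels, where “1” denotes a diseased plant) counts the four confusion-matrix values and derives from them the accuracy, precision, recall, and F1-score typically listed among such metrics:

```python
# Hypothetical ground-truth and predicted labels (1 = diseased, 0 = healthy)
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Count the four confusion-matrix cells
tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

# Standard classification metrics derived from TP/TN/FP/FN
accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)          # also called sensitivity
f1        = 2 * precision * recall / (precision + recall)
```

Here, for instance, precision expresses the fraction of plants classified as diseased that are truly diseased, while recall expresses the fraction of truly diseased plants that were actually detected.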
Other common evaluation metrics were the coefficient of correlation (R), the coefficient of determination (R², i.e., the square of the correlation coefficient), the Mean Absolute Error (MAE), the Mean Absolute Percentage Error (MAPE), and the Mean Squared Error (MSE), which are given by the following relationships [80,81]:
R = \frac{T\sum_{t=1}^{T} Z_t X_t - \sum_{t=1}^{T} Z_t \sum_{t=1}^{T} X_t}{\sqrt{T\sum_{t=1}^{T} Z_t^2 - \left(\sum_{t=1}^{T} Z_t\right)^2}\,\sqrt{T\sum_{t=1}^{T} X_t^2 - \left(\sum_{t=1}^{T} X_t\right)^2}},

\mathrm{MAE} = \frac{1}{T}\sum_{t=1}^{T} \left| Z_t - X_t \right|,

\mathrm{MAPE} = \frac{1}{T}\sum_{t=1}^{T} \left| \frac{Z_t - X_t}{Z_t} \right|,

\mathrm{MSE} = \frac{1}{T}\sum_{t=1}^{T} \left( Z_t - X_t \right)^2,
where X_t and Z_t correspond to the predicted and real values, respectively, t indexes each data point, and T is the number of testing records. Accordingly, low values of MAE, MAPE, and MSE denote a small error and, hence, better performance. In contrast, an R² near 1 is desired, demonstrating better model performance and indicating that the regression curve fits the data well.
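The relationships above can be translated directly into code; the sketch below (in Python, with hypothetical real values Z and predicted values X) computes R, MAE, MAPE, and MSE exactly as defined:

```python
import math

# Hypothetical real (Z) and predicted (X) values
Z = [2.0, 3.0, 5.0, 4.0, 6.0]
X = [2.5, 2.5, 5.0, 4.5, 5.5]
T = len(Z)

# Sums appearing in the formula for R
sum_zx = sum(z * x for z, x in zip(Z, X))
sum_z, sum_x = sum(Z), sum(X)
sum_z2 = sum(z * z for z in Z)
sum_x2 = sum(x * x for x in X)

# Coefficient of correlation
R = (T * sum_zx - sum_z * sum_x) / math.sqrt(
    (T * sum_z2 - sum_z ** 2) * (T * sum_x2 - sum_x ** 2))

# Error metrics
mae  = sum(abs(z - x) for z, x in zip(Z, X)) / T
mape = sum(abs((z - x) / z) for z, x in zip(Z, X)) / T
mse  = sum((z - x) ** 2 for z, x in zip(Z, X)) / T
```

The coefficient of determination is then simply `R ** 2`.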

4. Results

4.1. Preliminary Data Visualization Analysis

Graphical representation of data related to the reviewed studies, using maps, bar, or pie charts, for example, can provide an efficient approach to demonstrating and interpreting patterns in the data. Data visualization analysis, as it is usually referred to, can be vital when analyzing large amounts of data and has gained remarkable attention in the past few years, including in review studies. Indicatively, significant results can be deduced in an effort to identify: (a) the most contributing authors and organizations, (b) the most contributing international journals (or, equivalently, which research fields are interested in this topic), and (c) the current trends in this field [82].

4.1.1. Classification of the Studies in Terms of Application Domain

As can be seen in the flowchart of the present methodology (Figure 3), the literature survey on ML in agriculture resulted in 338 journal papers. Subsequently, these studies were classified into the four generic categories as well as into their sub-categories, as already mentioned above. Figure 5 depicts the distribution of these papers. In particular, the majority of the studies concerned crop management (68%), while soil management (10%), water management (10%), and livestock management (12% in total; animal welfare: 7% and livestock production: 5%) contributed almost equally to the present bibliographic survey. Focusing on crop management, the most contributing sub-categories were yield prediction (20%) and disease detection (19%). The former research field arises from the increasing interest of farmers in making decisions based on efficient management that can lead to the desired yield. Disease detection, on the other hand, is also very important, as diseases constitute a primary threat to food security and quality assurance. Equal percentages (13%) were observed for weed detection and crop recognition, both of which are essential for crop management at the farm and agricultural policy-making levels. Finally, examination of crop quality was relatively scarce, corresponding to 3% of all studies. This can be attributed to the complexity of monitoring and modeling the quality-related parameters.
At this point, it should be mentioned again that all the selected journal papers are summarized in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9, depending on their field of application, and presented in Appendix A. The columns of the tables correspond (from left to right) to the “Reference number” (Ref), “Input Data”, “Functionality”, “Models/Algorithms”, and “Best Output”. One additional column, namely “Crop”, exists for the sub-categories belonging to crop management, whereas the corresponding column for the sub-categories pertaining to livestock management refers to “Animal”. The present systematic review deals with a plethora of different ML models and algorithms. For the sake of brevity, the commonly used abbreviations are employed instead of the full names; these are summarized in Table A10 and Table A11 (also presented in Appendix A). The list of the aforementioned tables, along with their content, is given in Table 2.

4.1.2. Geographical Distribution of the Contributing Organizations

The purpose of this sub-section is to determine the geographical distribution of all the organizations contributing to ML applications in agriculture. To that end, the authors’ affiliations were taken into account. In case a paper included more than one author, which was the most frequent scenario, each country could contribute only once to the final map chart (Figure 6), similarly to [83,84]. As can be gleaned from Figure 6, research on ML in agriculture is distributed worldwide, including both developed and developing economies. Remarkably, out of the 55 contributing countries, the smallest contribution originated from African countries (3%), whereas the largest came from Asian countries (55%). The latter result is attributed mainly to the considerable contribution of Chinese (24.9%) and Indian organizations (10.1%). The USA appeared to be the second most contributing country with 20.7%, while Australia (9.5%), Spain (6.8%), Germany (5.9%), and Brazil, the UK, and Iran (5.62%) seem to be particularly interested in ML in agriculture. It should be stressed that livestock management, which is a relatively distinct sub-field compared to crop, water, and soil management, was primarily examined in studies coming from Australia, the USA, China, and the UK, while all the papers from Ireland focused on animals. Finally, another noteworthy observation is that a large number of articles resulted from international collaboration, with the synergy of China and the USA standing out.

4.1.3. Distribution of the Most Contributing Journal Papers

For the purpose of identifying the research areas most interested in ML in agriculture, the international journals in which the reviewed papers appeared most frequently are depicted in Figure 7. In total, there were 129 relevant journals; however, for brevity, only the journals contributing at least 4 papers are presented in this bar chart. As a general remark, remote sensing was of particular importance, since reliable data from satellites and UAVs, for instance, constitute valuable input for ML algorithms. In addition, smart farming, the environment, and agricultural sustainability were of central interest. Journals associated with computational techniques also appeared with considerable frequency. A typical example of this type of journal, which published the largest share of the studies (19.8%), was “Computers and Electronics in Agriculture”. This journal aims at presenting advances in the application of computers and electronic systems to solving problems in plant and animal production.
The “Remote Sensing” and “Sensors” journals followed with approximately 11.8% and 6.5% of the total number of publications, respectively. These are cross-sectoral journals concentrating on applications of science and sensing technologies in various fields, including agriculture. Other journals covering this research field were “IEEE Access” and the “International Journal of Remote Sensing”, with approximately 2.1% and 1.2% contribution, respectively. Moreover, agriculture-oriented journals also appear in Figure 7, including “Precision Agriculture”, “Frontiers in Plant Science”, “Agricultural and Forest Meteorology”, and “Agricultural Water Management”, each with 1–3%. These journals deal with several aspects of agriculture, ranging from management strategies (incorporating spatial and temporal data as a means of optimizing the productivity, resource use efficiency, sustainability, and profitability of agricultural production) up to crop molecular genetics and plant pathogens. An interdisciplinary journal concentrating on soil functions and processes, namely “Geoderma”, also appeared with 2.1%, plausibly covering the soil management generic category. Finally, several journals focusing on physics and applied natural sciences, such as “Applied Sciences” (2.7%), “Scientific Reports” (1.8%), “Biosystems Engineering” (1.5%), and “PLOS ONE” (1.5%), had a notable contribution to ML studies. Consequently, ML in agriculture concerns several disciplines and constitutes a fundamental area for developing various techniques that can be beneficial to other fields as well.

4.2. Synopsis of the Main Features Associated with the Relative Literature

4.2.1. Machine Learning Models Providing the Best Results

A wide range of ML algorithms was implemented in the selected studies; their abbreviations are given in Table A11. The ML algorithms that were used by each study, as well as those that provided the best output, are listed in the last two columns of Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9. These algorithms can be classified into the eight broad families of ML models summarized in Table A10. Figure 8 focuses on the best-performing ML models as a means of capturing a broad picture of the current situation and demonstrating progress, similarly to [12].
As demonstrated in Figure 8, the most frequent ML model providing the best output was, by far, Artificial Neural Networks (ANNs), which appeared in about half of the reviewed studies (namely, 51.8%). More specifically, ANN models provided the best results in the majority of the studies across all sub-categories. ANNs have been inspired by the biological neural networks that comprise human brains [85], and they learn via examples from representative data describing a physical phenomenon. A distinct characteristic of ANNs is that they can develop relationships between dependent and independent variables, and thus extract useful information from representative datasets. ANN models offer several benefits, such as their ability to handle noisy data [86], a situation that is very common in agricultural measurements. Among the most popular ANNs are Deep Neural Networks (DNNs), which utilize multiple hidden layers between the input and output layers. DNNs can be unsupervised, semi-supervised, or supervised. A common kind of DNN is the Convolutional Neural Network (CNN), whose layers, unlike those of common neural networks, arrange neurons in three dimensions [87]. In fact, CNNs were the algorithms providing the best output in all sub-categories, accounting for almost 50% of the individual share of ANNs. As stressed in recent studies, such as that of Yang et al. [88], CNNs are receiving more and more attention because of their efficiency in detection tasks based on image processing.
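The feature-enhancing operation at the heart of CNN layers is the discrete 2-D convolution. A minimal NumPy sketch (with a toy image and a hypothetical edge-detection kernel, both purely illustrative) shows the idea; note that, as in most deep-learning frameworks, what is computed is technically cross-correlation:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1), as applied in CNN layers.
    Like most deep-learning frameworks, this actually computes cross-correlation."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized patch under position (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 3x4 "image" with a vertical edge and a hypothetical edge-detection kernel
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1.0, 1.0]])
feature_map = conv2d(image, kernel)   # the edge column stands out in the output
```

In a trained CNN, many such kernels are learned from data rather than hand-written, and their responses are stacked into the three-dimensional layer volumes mentioned above.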
Recurrent Neural Networks (RNNs) followed, representing approximately 10% of the ANN studies, with Long Short-Term Memory (LSTM) networks standing out. They are called “recurrent” because they carry out the same process for every element of a sequence, with the previous computations determining the current output, while they have a “memory” that stores information pertaining to what has been calculated so far. RNNs can face problems concerning vanishing gradients and an inability to “memorize” long sequences of data. To address these issues, the cell structures of LSTM can control which part of the information will be stored in long-term memory and which discarded, resulting in an optimization of the memorizing process [51]. Moreover, Multi-Layer Perceptrons (MLPs), Fully Convolutional Networks (FCNs), and Radial Basis Function Networks (RBFNs) each showed the best performance in roughly 3–5% of the ANN studies. Finally, ML algorithms belonging to ANNs that appeared with low frequency were Back-Propagation Neural Networks (BPNNs), Modular Artificial Neural Networks (MANNs), Deep Belief Networks (DBNs), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), Subtractive Clustering Fuzzy Inference Systems (SCFIS), Takagi-Sugeno Fuzzy Neural Networks (TS-FNN), and Feed Forward Neural Networks (FFNNs).
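The recurrence described above can be sketched in a few lines: a single scalar RNN cell (with illustrative, hand-picked weights rather than learned ones) applies the same function at every time step, with the previous hidden state acting as the network’s memory:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step: the same function is applied to every element of the
    sequence, with the previous hidden state (the "memory") shaping the output."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Unroll over a hypothetical scalar sequence with illustrative weights
h = 0.0
for x_t in [0.5, -0.2, 0.8]:
    h = rnn_step(x_t, h, w_x=1.0, w_h=0.5, b=0.0)
```

An LSTM cell replaces this single `tanh` update with gated updates that decide what to keep in, and drop from, a separate long-term cell state.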
The second most accurate ML model was Ensemble Learning (EL), contributing to the ML models used in agricultural systems with approximately 22.2%. EL is a concise term for methods that integrate multiple inducers for the purpose of making a decision, normally in supervised ML tasks. An inducer is an algorithm that takes a number of labeled examples as input and creates a model that generalizes from these examples, so that predictions can be made for a set of new, unlabeled examples. The key feature of EL is that, by combining various models, the errors coming from a single inducer are likely to be compensated by the other inducers. Accordingly, the overall predictive performance would be superior to that of a single inducer [89]. This type of ML model appeared in all sub-categories apart from crop quality, perhaps owing to the small number of papers belonging to this sub-category. Support Vector Machines (SVMs) followed, contributing to approximately 11.5% of the studies. The strength of the SVM stems from its capability to accurately learn data patterns while showing reproducibility. Despite the fact that it can also be applied to regression applications, SVM is a commonly used methodology for classification across numerous data science settings [90], including agricultural research.
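The core EL idea of compensating the errors of individual inducers can be sketched without any particular library; in the toy example below (hypothetical data, and deliberately weak “decision stump” inducers standing in for real base learners), the final decision is taken by majority vote:

```python
def train_stump(data, feature):
    """Train a stump (a weak inducer) that thresholds one feature at the
    midpoint between the two class means."""
    pos = [x[feature] for x, label in data if label == 1]
    neg = [x[feature] for x, label in data if label == 0]
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x[feature] > threshold else 0

def majority_vote(classifiers, x):
    """Ensemble prediction: each inducer votes; the majority wins, so the
    errors of any single inducer can be outvoted by the others."""
    return 1 if sum(clf(x) for clf in classifiers) > len(classifiers) / 2 else 0

# Hypothetical labeled examples: (features, label); 1 could mean "diseased"
data = [
    ((2.0, 8.0, 1.0), 1), ((1.8, 7.5, 0.9), 1), ((2.2, 7.8, 1.1), 1),
    ((0.5, 3.0, 0.2), 0), ((0.7, 2.5, 0.3), 0), ((0.4, 3.2, 0.1), 0),
]
stumps = [train_stump(data, f) for f in range(3)]
prediction = majority_vote(stumps, (2.1, 7.9, 1.0))
```

Practical EL methods such as bagging, boosting, and random forests follow the same principle, but train and weight their inducers far more carefully.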
Decision Trees (DT) and Regression models came next with an equal percentage, namely 4.7%. Both of these ML models appeared in all generic categories. As far as DT are concerned, they are either regression or classification models structured in a tree-like architecture. Interestingly, handling missing data in DT is a well-studied problem. By implementing DT, the dataset is gradually organized into smaller subsets while, in parallel, a tree graph is created. In particular, each tree node denotes a different pairwise comparison on a certain feature, while each branch corresponds to the outcome of this comparison. As for leaf nodes, they stand for the final decision/prediction provided after following a certain rule [91,92]. Regression, in turn, refers to supervised learning models that aim to model a target value on the basis of independent predictors; the output is a continuous numerical value. Regression is typically applied for time series modeling, prediction, and defining the relationships between variables.
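The node/branch/leaf structure described above can be made concrete with a hand-coded sketch (hypothetical features and thresholds for an irrigation decision; a learned DT would instead derive such splits from training data):

```python
def irrigation_decision(sample):
    """Hand-coded decision tree sketch: each `if` is a node comparing one
    feature against a threshold, each branch the outcome of that comparison,
    and each `return` a leaf holding the final decision."""
    if sample["soil_moisture"] < 0.25:          # root node
        if sample["forecast_rain_mm"] < 5.0:    # internal node
            return "irrigate"
        return "wait for rain"                  # leaf: rain is expected
    return "no action"                          # leaf: soil already moist

decision = irrigation_decision({"soil_moisture": 0.1, "forecast_rain_mm": 2.0})
```

Tree-induction algorithms such as CART build exactly this kind of structure automatically, choosing at each node the feature and threshold that best separate the training subsets.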
Finally, the ML models leading to optimal performance, although with lower contribution to the literature, were Instance-Based Models (IBM) (2.7%), Dimensionality Reduction (DR) (1.5%), Bayesian Models (BM) (0.9%), and Clustering (0.3%). IBM appeared only in crop, water, and livestock management, whereas BM appeared only in crop and soil management. On the other hand, DR and Clustering appeared as the best solution only in crop management. In brief, IBM are memory-based ML models that learn by comparing new instances with examples within the training database. DR can be executed in both unsupervised and supervised learning settings, and it is typically carried out in advance of classification/regression so as to prevent dimensionality effects. Concerning BM, they are a family of probabilistic models whose analysis is performed within the Bayesian inference framework. BM can be implemented in both classification and regression problems and belong to the broad category of supervised learning. Finally, Clustering belongs to the unsupervised ML models; it involves automatically discovering natural groupings in the data [12].

4.2.2. Most Studied Crops and Animals

In this sub-section, the crops and animals most examined in the ML models are discussed, as derived from our search within the four sub-categories of crop management, similarly to [12]. These sub-categories refer to yield prediction, disease detection, crop recognition, and crop quality. Overall, approximately 80 different crop species were investigated. The 10 most studied crops are summarized in Figure 9. Specifically, the remarkable interest in maize (also known as corn) can be attributed to the fact that it is cultivated in many parts of the globe as well as to its versatile usage (for example, direct consumption by humans, animal feed, and the production of ethanol and other biofuels). Wheat and rice follow, which are two of the most widely consumed cereal grains. According to the Food and Agriculture Organization (FAO) [93], worldwide trade in wheat exceeds that of all other crops combined. Rice is the cereal grain with the third-highest production and constitutes the most consumed staple food in Asia [94]. The large contribution of Asian countries shown in Figure 6, such as China and India, justifies the interest in this crop. In the same vein, soybeans, which are broadly cultivated in East Asia, the USA, Africa, and Australia [95], appeared in many studies. Finally, tomato, grape, canola/rapeseed (cultivated primarily for its oil-rich seed), potato, cotton, and barley complete the top 10 examined crops. All these species are widely cultivated all over the world. Some other indicative species, each investigated at least five times in the present reviewed studies, were alfalfa, citrus, sunflower, pepper, pea, apple, squash, sugarcane, and rye.
As far as livestock management is concerned, the examined animal species can be classified, in descending order of frequency, into the categories of cattle (58.5%), sheep and goats (26.8%), swine (14.6%), poultry (4.9%), and sheepdog (2.4%). As depicted in Figure 10, the last animal, which has historically been utilized in the raising of sheep, was investigated in only one study, belonging to animal welfare, whereas all the other animals were examined in both categories of livestock management. In particular, the most investigated animal in both animal welfare and livestock production was cattle. Sheep and goats came next, with nine studies on sheep and two on goats. Cattle are usually raised as livestock for meat, milk, and hides used for leather. Similarly, sheep are raised for meat and milk as well as fleece. Finally, swine (often called domestic pigs) and poultry (for example, chicken, turkey, and duck), which are used mainly for their meat or eggs (poultry), had equal contributions from the two livestock sub-categories.

4.2.3. Most Studied Features and Technologies

As mentioned at the beginning of this study, modern agriculture has to incorporate large amounts of heterogeneous data originating from a variety of sensors over large areas at various spatial scales and resolutions. Such data are used as input to ML algorithms, which learn iteratively until the process at hand is modeled in the most effective way possible. Figure 11 shows the features and technologies that were used in the reviewed studies, separately for each category, for the sake of better comprehending the results of the analysis.
Data coming from remote sensing were the most common in the yield prediction sub-category. Remote sensing, in turn, was primarily based on data derived from satellites (40.6% of the total studies published in this sub-category) and, secondarily, from UAVs (23.2%). A remarkable observation is the rapid increase in the usage of UAVs versus satellites from 2018 to 2020, as UAVs appear to be a reliable alternative that can give faster and cheaper results, usually at higher resolution and largely independent of the weather conditions. Therefore, UAVs allow for discriminating details of localized, circumscribed regions that the satellites’ lower resolution may miss, especially under cloudy conditions. This explosion in the use of UAV systems in agriculture is a result of the developing market of drones and of the sensing solutions attached to them, rendering them economically affordable. In addition, the establishment of formal regulations for UAV operations and the simplification and automation of the operational and analysis processes contributed significantly to the increasing popularity of these systems. Data pertaining to the weather conditions of the investigated area were also of great importance, as were the soil parameters of the farm at hand. An additional way of acquiring data was via in situ manual measurements, involving quantities such as crop height, plant growth, and crop maturity. Finally, data concerning topographic, irrigation, and fertilization aspects appeared with approximately equal frequency.
As far as disease detection is concerned, Red-Green-Blue (RGB) images appear to be the most usual input data for the ML algorithms (in 62% of the publications). Normally, deep learning methods like CNNs are implemented with the intention of training a classifier to discriminate images depicting healthy leaves, for example, from infected ones. CNNs use particular operations to transform the RGB images so that the desired features are enhanced, with higher weights being given to the images having the most suitable features. This characteristic constitutes a significant advantage of CNNs over other ML algorithms when it comes to image classification [79]. The second most common input data came from either multispectral or hyperspectral measurements originating from spectroradiometers, UAVs, and satellites. Concerning the investigated diseases, fungal diseases were the most common, followed by bacterial diseases, as illustrated in Figure 12a. These kinds of diseases can cause major problems in agriculture, with detrimental economic consequences [96]. Other examined origins of crop diseases were, in descending order of frequency, pests, viruses, toxicity, and deficiencies.
Images were also the most used input data for weed detection purposes. These were RGB images originating mainly from in situ measurements as well as from UGVs and UAVs and, secondarily, multispectral images from the aforementioned sources. Other input types observed with lower frequency were satellite multispectral images (mainly due to the considerably low resolution they provide), video recordings, and hyperspectral and greyscale images. Concerning crop recognition, the majority of the studies used data coming mostly from satellites and, secondarily, from in situ manual measurements. This is attributed to the fact that most of the studies in this category concern crop classification, a sector where satellite imaging is the most widely used data source owing to its potential for the analysis of time series over extremely large surfaces of cultivated land. Laboratory measurements followed, while RGB and greyscale images as well as hyperspectral and multispectral measurements from UAVs were observed with lower incidence.
The input data pertaining to crop quality consisted mainly of RGB images, while X-ray images were also utilized (for seed germination monitoring). Additionally, quality parameters, such as color, mass, and flesh firmness, were used. There were also two studies using spectral data from either satellites or spectroradiometers. In general, the studies belonging to this sub-category dealt with either crop quality (80%) or seed germination potential (20%) (Figure 12b). The latter refers to the seed quality assessment that is essential for the seed production industry. Two studies were found on germination, both of which combined X-ray image analysis and ML.
Concerning soil management, various soil properties were taken into account in 65.7% of the studies. These properties included salinity, organic matter content, electrical conductivity, and soil organic carbon. Usage of weather data was also very common (in 48.6% of the studies), while topographic data, data pertaining to the soil moisture content (namely the ratio of the water mass over the dry soil mass), and crop properties appeared with lower frequency. Additionally, remote sensing, including satellite and UAV multispectral and hyperspectral data, as well as proximal sensing to a lesser extent, were very frequent choices (in 40% of the studies). Finally, properties associated with soil temperature, land type, land cover, root microbial dynamics, and groundwater salinity make up the rest of the data, labeled as “other” in the corresponding graph of Figure 11.
In water management, weather data constituted the most common input (appearing in 75% of the studies), with ET being used in the vast majority of them. In many cases, accurate estimation of ET (the sum of the transpiration via the plant canopy and the evaporation from plant, soil, and open water surfaces) is among the most central elements of the hydrologic cycle for the optimal management of water resources [97]. Data from remote sensors and measurements of soil water content were also broadly used in this category. Soil water availability has a central impact on crops’ root growth by affecting soil aeration and nutrient availability [98]. Stem water potential, appearing in three studies, is a measure of the water tension within the xylem of the plant, thereby functioning as an indicator of the crop’s water status. Furthermore, in situ measurements, soil data, and other parameters related to cumulative water infiltration, soil and water quality, field topography, and crop yield were also used, as can be seen in Figure 11.
Finally, as concerns livestock management, motion capture sensors, including accelerometers, gyroscopes, and pedometers, were the most common devices providing information about the daily activities of animals. These sensors were used solely in the studies investigating animal welfare. Images, audio, and video recordings came next, appearing, however, in both the animal welfare and livestock production sub-categories. Physical and growth characteristics followed, with slightly lower incidence, appearing mainly in the livestock production sub-category. These characteristics included the animal’s weight, gender, age, metabolites, biometric traits, backfat and muscle thickness, and heat stress. The last characteristic may have detrimental consequences for livestock health and product quality [99], while measurements of backfat and muscle thickness allow estimates of the carcass lean yield to be made [100].

5. Discussion and Main Conclusions

The present systematic review study deals with ML in agriculture, an ever-increasing topic worldwide. To that end, a comprehensive analysis of the present status was conducted concerning the four generic categories that had been identified in the previous review by Liakos et al. [12]. These categories pertain to crop, water, soil, and livestock management. Thus, by reviewing the relative literature of the last three years (2018–2020), several aspects were analyzed on the basis of an integrated approach. In summary, the following main conclusions can be drawn:
  • The majority of the journal papers focused on crop management, whereas the other three generic categories contributed almost equal percentages. Considering the review paper of [12] as a reference study, it can be deduced that the above picture remains, more or less, the same, with the only difference being the decrease of the percentage of articles regarding livestock from 19% to 12% in favor of those referring to crop management. Nonetheless, this reveals just one side of the coin. Taking into account the tremendous increase in the number of relevant papers published within the last three years (in particular, 40 articles were identified in [12] compared to the 338 of the present literature survey), approximately 400% more publications were found on livestock management. Another important finding was the increasing research interest in crop recognition.
  • Several ML algorithms have been developed for the purpose of handling the heterogeneous data coming from agricultural fields. These algorithms can be classified into families of ML models. Similar to [12], the most efficient ML models proved to be ANNs. Nevertheless, in contrast to [12], interest has also shifted towards EL, which can combine the predictions originating from more than one model. SVM completes the group of the three most accurate ML models in agriculture, owing to advantages such as its high performance when working with image data [101].
  • As far as the most investigated crops are concerned, mainly maize and, secondarily, wheat, rice, and soybean were widely studied using ML. In livestock management, cattle along with sheep and goats stood out, constituting almost 85% of the studies. Compared to [12], more species have been included, while wheat and rice, as well as cattle, remain important subjects for ML applications.
  • A very important result of the present review study was the demonstration of the input data used in the ML algorithms and the corresponding sensors. RGB images constituted the most common choice, thus justifying the broad usage of CNNs due to their ability to handle this type of data more efficiently. Moreover, a wide range of parameters pertaining to weather as well as soil, water, and crop quality was used. The most common means of acquiring measurements for ML applications was remote sensing, including imaging from satellites, UAVs, and UGVs, while in situ and laboratory measurements were also used. As highlighted above, UAVs are constantly gaining ground against satellites, mainly because of their flexibility and ability to provide high-resolution images largely independent of cloud cover. Satellites, on the other hand, can supply time series over large areas [102]. Finally, animal welfare-related studies used mainly devices such as accelerometers for activity recognition, whereas those referring to livestock production primarily utilized physical and growth characteristics of the animal.
As can be inferred from the geographical distribution (illustrated in Figure 6) in tandem with the broad spectrum of research fields, ML applications facilitating various aspects of management in the agricultural sector are an important issue on an international scale. As a matter of fact, their versatile nature favors convergence research. Convergence research is a relatively recently introduced approach based on shared knowledge between different research fields that can have a positive impact on society in several respects, including improving the environmental footprint and safeguarding human health. In this direction, ML in agriculture has considerable potential to create value.
Another noteworthy finding of the present analysis is the increasing interest in ML analyses for agricultural applications. More specifically, as shown in Figure 13, the total number of relevant studies increased by approximately 26% between 2018 and 2019. The following year (i.e., 2020), the corresponding increase jumped to 109% over 2019, resulting in an overall rise of 164% compared with 2018. This accelerating research interest in ML in agriculture follows from various factors, in particular the considerable advancement of ICT systems in agriculture. Moreover, there exists a vital need to increase the efficiency of agricultural practices while reducing their environmental burden, which calls for both reliable measurements and the handling of large volumes of data as a means of providing a broad overview of the processes taking place in agriculture. The current technological boom has great potential to strengthen agriculture in the direction of enhancing food security and responding to rising consumer demands.
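The compounding of these percentages can be checked with a short sketch (the absolute paper counts are assumed for illustration only, since this section reports percentages rather than raw totals): a 26% rise followed by a 109% rise yields roughly (1.26 × 2.09 − 1) ≈ 163–164% overall.

```python
# Sanity check of the reported year-on-year growth rates. The baseline of
# 100 papers in 2018 is a hypothetical value chosen only to illustrate how
# the percentages compose; it is not taken from the review.
def pct_increase(new, old):
    """Percentage increase of `new` over `old`."""
    return 100.0 * (new - old) / old

papers = {2018: 100, 2019: 126, 2020: 263}  # assumed relative volumes

g_2019 = pct_increase(papers[2019], papers[2018])   # ~26% (2019 vs. 2018)
g_2020 = pct_increase(papers[2020], papers[2019])   # ~109% (2020 vs. 2019)
overall = pct_increase(papers[2020], papers[2018])  # ~163% (2020 vs. 2018)
```

With these assumed volumes the overall rise comes out at about 163%, matching the reported ~164% up to rounding of the intermediate percentages.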
In a nutshell, ICT in combination with ML seems to constitute one of our best hopes for meeting the emerging challenges. Taking into account today's rate of data accumulation along with the advancement of various technologies, farms will certainly need to advance their management practices by adopting Decision Support Systems (DSSs) tailored to the needs of each cultivation system. These DSSs use algorithms that can work on a wider set of cases by considering vast amounts of data and parameters that farmers would find impossible to handle. However, most ICT requires upfront expenditure, namely the high infrastructure investment costs that frequently prevent farmers from adopting these technologies. This is going to be a pressing issue, mainly in developing economies where agriculture is an essential economic sector. Nevertheless, achieving a tangible impact is a long-haul endeavor. A different mentality is required of all stakeholders, so as to learn new skills, become aware of the potential profits of handling big data, and secure sufficient funding. Overall, considering the constantly increasing recognition of the value of artificial intelligence in agriculture, ML will certainly become a behind-the-scenes enabler of a sustainable and more productive agriculture. It is anticipated that the present systematic effort will constitute a beneficial guide for researchers, manufacturers, engineers, ICT system developers, policymakers, and farmers and, consequently, contribute towards more systematic research on ML in agriculture.

Author Contributions

Conceptualization, D.B.; methodology, L.B., G.D., R.B., D.K. and A.C.T.; investigation, L.B. and G.D.; writing—original draft preparation, L.B. and A.C.T.; writing—review and editing, L.B., G.D., D.K., A.C.T., R.B. and D.B.; visualization, L.B.; supervision, D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partly supported by the Project “BioCircular: Bio-production System for Circular Precision Farming” (project code: T1EDK- 03987) co-financed by the European Union and the Greek national funds through the Operational Programme Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, the reviewed articles are summarized within the corresponding Tables as described in Table 2.
Table A1. Crop Management: Yield Prediction.
Ref | Crop | Input Data | Functionality | Models/Algorithms | Best Output
[103]CoffeeWeather data, soil fertilityPrediction of Robusta coffee yield by using various soil fertility propertiesELM, RF, MLRELM: Model with SOM, K, S:
RMSE = 496.35 kgha−1, MAE = 326.40 kgha−1
[104]MaizeWeather and satellite spectral dataSilage maize yield estimation via Landsat 8 OLI dataBRT, RFR, SVR, GPRBRT: R = 0.89, RMSE = 4.66
[105]MaizeSoil properties, topographic, multispectral aerial imagesPrediction of corn yield and soil properties (SOM, CEC, Mg, K, pH)RF, ANN, SVM, GBM, Cubist(1) Corn yield: RF (R2 = 0.53); (2) SOM: NN (R2 = 0.64); (3) CEC: NN (R2 = 0.67); (4) K: SVM (R2 = 0.21); (5) Mg: SVM (R2 = 0.22); (6) pH: GBM (R2 = 0.15)
[106]CottonSatellite spectral dataCotton yield estimationANN(1) 2013: Yield vs. CI (R = −0.2–0.60), best ANN (R = 0.68); (2) 2014: Yield vs. CI (R = −0.79–0.84), best ANN (R = 0.86)
[107]AppleRGB imagesDetection and estimation of the number of apples in canopy imagesMLRYield relative error = −10–13%,
Yield relative error STD = 28% of average tree yield
[108]MaizeCrop data—CERES model, satellite spectral dataForecasting spring maize yield from Landsat-8 imagesSVM, RF, DT, LDA, KNNRS: SVM: Acc = 97%, RMSE = 397 kgha−1
[109]Maize, soybeanSatellite spectral dataEstimation of corn and soybean yield via Landsat and SPOT imagesMLR, ANNR2 values: (1) Maize: ANN: 0.92, (2) Soybean: ANN: 0.90
[110]TurmericSoil fertility, weather dataForecasting oil yield produced from turmeric rhizomesANNMultilayer feed-forward NN with 12 nodes: R2 = 0.88
[111]SunflowerPlant height, SPADPrediction of sunflower seed yieldPLSR, ANN(1) ANN: RMSE = 0.66 tha−1, R2 = 0.86; (2) PLSR: RMSE = 0.93 tha−1, R2 = 0.69
[112]PistachioIrrigation, soil characteristicsEstimation of pistachio yield in orchardsMLR, ANNAcc values: ANN: 90%, MLR: 28%
[113]RiceWeather data, irrigation, planting area, fertilizationEvaluation of feature subsets for prediction of paddy crop yieldANN, SVR, KNN, RFForward Feature Selection:
RF: RMSE = 0.085, MAE = 0.055, R = 0.93
[114]PotatoSatellite spectral dataPrediction of potato yield via Sentinel 2 satellite dataMLR, RQL, LB, SVM, RF, MARS, KNN, ANN(1) Reduced dataset: LB: MAE = 8.95%, R2 = 0.89; (2) No feature selection: SVM: MAE = 8.64%, R2 = 0.93; (3) 1–2 months prior to harvest: RF: MAE = 8.71%, R2 = 0.89
[115]WheatSatellite spectral dataPrediction of wheat yieldSVM, RF, ANNR2 values: (1) SVM: 0.74; (2) RF: 0.68; (3) ANN: 0.68
[116]Soybean, MaizeHydrological, weather and satellite spectral dataPrediction of soybean and corn yieldsDNN, RF, SVM, MARS, ERT, ANNDNN (1) Corn: 21–33% more accurate (2) Soybean: 17–22% more accurate
[117]Wheat, barleyMultispectral images from UAVPrediction of barley and wheat yieldsCNN(1) Early growth phase(<25%):
MAE = 484.3 kgha−1, MAPE = 8.8%; (2) Later growth phase(>25%): MAE = 484.3 kgha−1, MAPE = 8.8%
[118]StrawberryMultispectral images from UAVDetection and counting of strawberry species for yield predictionCNNFaster RCNN: (1) Detection: MaP = 0.83 (at 2 m), MaP = 0.72 (at 3 m); (2) Count: Acc = 84.1%, Average occlusion = 13.5%
[119]RiceWeather data, irrigation, planting area, fertilizationPrediction of paddy fields yieldANN, MLR, SVR, KNN, RFANN-MLR: R = 0.99, RMSE = 0.051, MAE = 0.041
[120]SoybeanWeather and satellite spectral dataPrediction of soybean yield in 15 states of USACNN, LSTM2011–2015: End-of-season
RMSE = 329.53 kgha−1, R2 = 0.78
[121]MaizeSatellite spectral dataPrediction of maize yieldMLR, RF, SVMRF: (1) yield: R2 = 0.6; (2) GNDVI: R2 = 0.48;
Best monitoring period:
Crop age = 105–135 days
[122]MangoMultispectral data from UGVEstimation of mango maturity level by simulating imaging devices of optical filtersSVMEstimation of dry matter by using a 4-sensor device with 4 filters: R2 = 0.69
[123]Rapeseed, barley, wheatEC, STI, gamma radiometrics and weather dataForecasting crop yieldRFRMSE = 0.36–0.42 t/ha, Lin’s CCC = 0.89–0.92
[53]MaizeGenetic information of hybrids, soil and weather dataPrediction of maize yieldDNN(1) With predicted weather data: RMSE = 12% of average yield, 50% of STD; (2) Using ideal weather data: RMSE = 11% of average yield, 46% of STD
[124]RiceRGB leaf imagesPrediction of nutrient deficiencies (P, N, K) in image leaves from paddy fieldsANNAcc = 77%
[125]RiceRGB and multispectral images from UAVEstimation of rice grain yieldCNNR2 values: (1) Only RGB images: 0.424–0.499; (2) RGB and multispectral images: 0.464–0.511
[126]MaizeSatellite spectral data, crop modeling dataEstimation of end-of-season and early maize yieldRF(1) Early maize yield: R2 = 0.53, RMSE = 271 kgha−1, MAE = 202 kgha−1; (2) End-of-season maize yield: R2 = 0.59, RMSE = 258 kg ha−1, MAE = 201 kgha−1
[127]PotatoSoil parameters and tillage treatmentsForecasting of organic potato yieldANN, MLR(1) MLR: R2 = 0.894, RMSE = 0.431, MAE = 0.327; (2) ANN: R2 = 0.95, RMSE = 0.431, MAE = 0.327
[128]MaizeSimulations data, weather and soil dataPrediction of crop yield based on gridded crop meta-modelsRF, XGBoost(1) XGBoost: (a) growing season climate: R2 = 0.91, MAE = 0.74, (b) annual climate: R2 = 0.92, MAE = 0.66: (2) RF: (a) growing season climate: R2 = 0.94, MAE = 0.71, (b) annual climate: R2 = 0.95, MAE = 0.58
[129]SoybeanSatellite spectral data, precipitation and daytimeForecasting soybean yieldRF, multivariate OLS, LSTM(1) DOY 16: OLS: MAE = 0.42 Mgha−1; (2) DOY 32: LSTM: MAE = 0.42 Mgha−1; (3) DOY 48: LSTM: MAE = 0.25 Mgha−1; (4) DOY 64: LSTM: MAE = 0.24 Mgha−1
[130]PotatoTopography, soil EC, soil chemistry and multispectral data from ground based sensorsPotato tuber yield prediction via ground based proximal sensingLR, KNN, EN, SVRBest models: (1) SVR: 2017: (a) New Brunswick: RMSE = 5.97 tha−1, (b) Prince Edward Island: RMSE = 6.60 tha−1; (2) 2018: (a) New Brunswick RMSE = 4.62 tha−1, (b) Prince Edward Island: RMSE = 6.17 tha−1
[131]Rice, maize,
millet, ragi
Weather dataPrediction of various kharif crops yieldMANN, SVROverall RMSE = 79.85%
[132]WheatSoil, weather, and satellite spectral dataWinter wheat prediction from four mid-season timingsRF, GPR, SVM, ANN, KNN, DT, BT(1) RF: R2 = 0.81, RMSE = 910–920 kgha−1, MAE = 740 kgha−1; (2) GPR: R2 = 0.78, RMSE = 920–960 kgha−1, MAE = 735–767 kgha−1
[133]MaizeData derived from various cropping systemsMaize grain yield prediction from CA and conventional cropping systems LDA, MLR, GNB, KNN, CART, SVMBest results: LDA: Acc = 0.61, Precision = 0.59, Recall = 0.59, F1-score = 0.59
[134]SoybeanMultispectral, RGB and thermal images from UAVEstimation of soybean grain yieldDNN, PLSR, RFR, SVRDNN: (1) Intermediate-level feature fusion: R2 = 0.720, Relative RMSE = 15.9%; (2) input-level feature fusion: R2 = 0.691,
Relative RMSE = 16.8%
[135]Soybean, MaizeWeather data and soil dataSoybean and corn yield forecastingCNN-RNN, RF, LASSO, DNNCNN-RNN: RMSE values (bushels/acre): (1) Soybean: 2016: 4.15, 2017: 4.32, 2018: 4.91; (2) Maize: 2016: 16.48, 2017: 15.74, 2018: 17.64
[136]GrapeMultispectral images from UAVEstimation of vineyard final yieldMLP(1) Only NDVI: RMSE = 1.2 kg/vine, Relative error = 28.7%; (2) Both NDVI ANF VFC: RMSE = 0.9 kg/vine,
Relative error = 21.8%
[137]RiceSatellite spectral dataPrediction of rice crop yieldRF, SVM(1) HD NDVI: RF: RMSE = 11.2%,
MAE = 9.1%, SVM: RMSE = 8.7%, MAE = 5.6%; (2) HDM NDVI: RF: RMSE = 11.3%, MAE = 9.2%, SVM: RMSE = 8.7%, MAE = 5.6%
[138]MaizeFertilization, planting density, soil EC, satellite spectral dataPrediction of corn yield response to nitrogen and seed rate managementCNNAverage value for 9 fields in the USA: RMSE = 0.7
[139]SugarcaneMonthly precipitation data Forecasting of sugarcane yield RNNRMSE = 0.31 tha−1, MAE = 0.39 tha−1, MAPE = 5.18%
[140]WheatSatellite spectral and weather dataEstimation of wheat yieldSVR, RF, Cubist, XGBoost, MLP, GPR, KNN, MARSSVR: RMSE = 0.55 tha−1, R2 = 0.77
[141]Maize, SoybeanSatellite spectral dataForecasting of maize and soybean yield MLR, ANNANN: (1) Corn: RMSE = 4.83–8.41, R = 0.91–0.99; (2) Soybean: RMSE = 5.18–7.77, R = 0.79–0.99
[142]MaizeSatellite spectral and weather dataPrediction of maize yield under severe weather conditions DNN(1) Drought cases: R = 0.954; (2) Heatwave cases: R = 0.887–0.914
[143]RiceWeather dataPaddy yield predictionANNR = 0.78–1.00,
MSE = 0.040–0.204
[144]MaizePlant population, soil and weather dataMaize yield forecasting in 3 US states of Corn BeltXGBoost, RF, LASSO, GBM, WELWEL: RMSE = 1.138 kgha−1
[145]MaizeSatellite spectral and weather dataEstimation of maize yield DLSR2 = 0.76, RMSE = 0.038 tha−1
[146]Various cropsSatellite spectral and weather dataPrediction of autumn crops yieldSVR, RF, DNNRMSE values (×104 tons)
SVR = 501.98; RF = 477.45; DNN = 253.74
[147]WheatMultispectral images from UAVGrowth monitoring and yield prediction of wheat in key growth stagesLR, SMLR, PLSR, ANN, RFBest results: RF:
R2 = 0.78, RMSE = 0.103
[148]CottonTopographic, weather, soil and satellite spectral dataWithin-field yield predictionRF, GBBest results: RF: RMSE = 0.20 tha−1, CCC = 0.50–0.66
[149]CottonSatellite spectral dataYield prediction RF, CARTRF: RMSE = 62.77 Kg ha−1, MAPE = 0.32
[150]RiceMultispectral images from UAVPrediction of rice grain yieldRFRMSE = 62.77 Kg ha−1, MAPE = 0.32
[151]SoybeanMultispectral images from UAVYield estimation in soybeanMLPR = 0.92
[152]PotatoWeather, irrigation, and satellite spectral dataForecasting of yield in potato fields at municipal levelRF, SVM, GLM(1) winter cycle: R2 = 0.757, %RMSE = 18.9; (2) summer cycle; R2 = 0.858, %RMSE = 14.9
[153]SugarcaneSatellite spectral dataPrediction of sugarcane yieldMLRR2 = 0.92–0.99
[154]CottonMultispectral images from UAVEstimation of cotton yieldANN, SVR, RFRANN: R2 = 0.9
[155]RiceWeather and soil dataPrediction of rice yields from Blockchain nodesRF, MLR, GBR, DTRRF: R2 = 0.941, %RMSE = 0.62, MAE = 0.72
[156]MaizeMultispectral images from UAVPrediction of maize yield at specific phenological stagesGBStage V10: R2 = 0.90; Stage VT: R2 = 0.93
[157]WheatSatellite spectral and weather data, soil hydraulic propertiesForecasting of wheat yieldRF, MLRRF: 1 month before harvest: R = 0.85, RMSE = 0.70 tha−1, ROC = 0.90
[158]MaizeSoil and weather dataEstimation of maize yield with publicly available dataLSTM, LASSO, RF, SVR, AdaBoostLSTM: MAE = 0.83 (buac−1), MAPE = 0.48%
[159]RiceSoil and weather dataFinding optimal features gathering for forecasting paddy yieldRF, DT, GBMRF: MSE = 0.07, R2 = 0.67;
[160]AlfalfaHyperspectral data from UAVIn-season alfalfa yield forecastCombination of RF,
SVR, KNN
R2 = 0.874
[161]MaizeMultispectral images from UAVYield prediction of maizeBPNN, SVM, RF, ELMSVM: RMSE = 1.099, MAE = 0.886
[162]MenthaSatellite spectral data, field inventory data (soil, plant height, biomass)Mentha crop biomass forecastingMLPR2 = 0.762, RMSE = 2.74 th−1
[163]WheatMultispectral images from UAVPrediction of wheat grain yieldLR, RF, SVM, ANNLR: RMSE = 972 kgha−1, R2 = 0.62
[164]MaizeMultispectral images from UAVPrediction of maize yieldRF, RF+R, RF+BAG, SVM, LR, KNN, ANNRF: R = 0.78, MAE = 853.11 kgha−1
[165]PotatoHyperspectral data from UAVYield prediction at two growth stagesRF, PLSRRF: R2 = 0.63, MAE = 853.11 kgha−1
[166]CarrotSatellite spectral dataCarrot yield MappingRFR2 = 0.82, RMSE = 2.64 Mgha−1; MAE = 1.74 Mgha−1
[167]Soybeanmultispectral images from UAVPredicting yieldDTRMSE = 196 kgha−1
[168]WheatSatellite spectral, soil and weather dataWinter wheat yield prediction at a regional levelCombination of LSTM and CNNR2 = 0.75, RMSE = 732 kgha−1;
[169]PotatoHyperspectral data from UAVYield prediction at two growth stagesRF, PLSRR2 values: RF: 0.63; PLSR: 0.81
[170]WheatSatellite spectral and weather dataWinter yield prediction in the Conterminous United StatesOLS, LASSO, SVM, RF, AdaBoost, DNNAdaBoost: R2 = 0.86, RMSE = 0.51 tha−1, MAE = 0.39 tha−1
Acc: Accuracy: CA: Conservation Agriculture; CI: Crop Indices; CEC: Cation Exchange Capacity; CCC: Concordance Correlation Coefficient; DOY: Day Of Year; EC: Electrical Conductivity; HD: Heading Date; HDM: Heading Date to Maturity; K: Potassium; Mg: Magnesium; N: Nitrogen; OLI: Operational Land Imager; P: Phosphorus; RGB: Red-Green-Blue; S: Sulphur; SOM: Soil Organic Matter; SPAD: Soil and Plant Analyzer Development; STI: Soil Texture Information; STD: Standard Deviation; UAV: Unmanned Aerial Vehicle; UGV: Unmanned Ground Vehicle.
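Most yield-prediction entries in Table A1 report RMSE, MAE, and R2. For reference, these regression metrics can be computed as in the following sketch (plain Python, no external dependencies; the observed/predicted yield values are illustrative only):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error, in the units of the target variable."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean Absolute Error, in the units of the target variable."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination (1 = perfect fit)."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy example: observed vs. predicted yields (t/ha), illustrative values only
obs, pred = [2.0, 4.0, 6.0], [2.5, 4.0, 5.5]
```

Because RMSE squares the residuals, it penalizes large errors more heavily than MAE, which is why many of the tabulated studies report both.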
Table A2. Crop Management: Disease Detection.
Ref | Crop | Input Data | Functionality | Models/Algorithms | Best Output
[171]Various cropsRGB imagesDetection and diagnosis of plant diseasesCNNAcc = 99.53%
[172]MelonFluorescence, thermal imagesDetection of Dickeya dadantii in melon plantsLR, SVM, ANNANN: Whole leaves: Acc = 96%; F1 score = 0.99
[173]TomatoRGB imagesRecognition of 10 plant diseases and pests in tomato plantsCNNRecognition rate = 96%
[174]AvocadoHyperspectral imagesDetection of nitrogen and iron deficiencies and laurel wilt disease in avocadoDT, MLPMLP: Detection at early stage: Acc = 100%
[175]MaizeRGB imagesExamination of nine factors affecting disease detection in maize fieldsCNNAcc values: (1) Original dataset: 76%; Background removed: 79%; (2) Subdivided (full): 87%; (3) Subdivided (reduced): 81%
[176]Milk thistleSpectral measurements from spectroradiometerIdentification of Microbotryum silybum in milk thistle plantsMLP-ARDAcc = 90.32%
[177]TomatoSpectral measurements from spectroradiometerDetection of leaf diseases (target, bacterial spots and late blight) in tomatoKNNAcc values: (1) Healthy leaves: 100%, (2) Asymptomatic: 100%, (3) Early stage: 97.8%, (4) Late stage: 100%
[178]MaizeRGB imagesIdentification of eight types of leaf diseases in maizeCNN(1) GoogLeNet:
Acc = 98.9%; (2) Cifar10: Acc = 98.8%
[179]Various cropsRGB imagesIdentification of six plant leaf diseasesRBFN(1) Early blight: Acc = 0.8914; (2) Common rusts: Acc = 0.8871
[180]CitrusRGB imagesDetection and classification of citrus diseasesSVMAcc values: 1st dataset: 97%; 1st and 2nd dataset: 89%; 3rd dataset: 90.4%
[181]GrapeMultispectral images from UAVIdentification of infected areasCNN(1) Color space YUV: Acc = 95.84%; (2) Color space YUV and ExGR: Acc = 95.92%
[182]SoybeanRGB imagesDetection and classification of three leaf diseases in soybeansSVM(1) Healthy: Acc = 82%; (2) Downy mildew: Acc = 79%; (3) Frog eye: Acc = 95.9%; (4) Septoria leaf blight: Acc = 90%
[183]MilletRGB imagesIdentification of fungal disease (mildew) in pearl milletCNNAcc = 95.00%, Precision = 90.50%, Recall = 94.50%, F1 score = 91.75%
[184]MaizeRGB images from UAVDetection of northern leaf blight in maizeCNNAcc = 95.1%
[185]WheatRGB images from UAVClassification of helminthosporium leaf blotch in wheatCNNAcc = 91.43%,
[186]AvocadoRGB images, multispectral imagesDetection of laurel wilt disease in healthy and stressed avocado plants in early stageMLP, KNNHealthy vs. Nitrogen deficiency using 6 bands images: (1) MLP: Acc = 98%; (2) KNN: Acc = 86%
[187]BasilRGB imagesIdentification and classification of five types of leave diseases in four kinds of basil leavesDT, RF, SVM, AdaBoost, GLM, ANN, NB, KNN, LDARF: Acc = 98.4%
[188]Various cropsRGB imagesIdentification of several diseases on leavesCNNAcc values: (1) Healthy: 89%; (2) Mildly diseased: 31%; (3) Moderately diseased: 87%; (4) Severely diseased: 94%
[189]TeaRGB images from UAVIdentification of tea red scab, tea leaf blight and tea red leaf spot diseases in tea leavesSVM, DT, RF, CNNCNN: Acc values: (1) tea red scab: 0.7; (2) tea leaf blight: 1.0; (3) tea red leaf spot: 1.0
[190]WheatHyperspectral images from UAVDetection of yellow rust in wheat plotsCNNAcc = 0.85
[191]GrapeRGB imagesDetection of grapevine yellows in red grapesCNNSensitivity = 98.96%
Specificity = 99.40%
[192]MaizeRGB images from UAVDetection of northern leaf blight in maizeCNNAcc = 0.9979,
F1 score = 0.7153
[193]Sugar beetRGB imagesDetection and classification of diseased leaf spots in sugar beetCNNAcc = 95.48%
[194]Various cropsRGB imagesIdentification of various plant leaf diseasesCNNAcc = 96.46%
[195]StrawberryRGB imagesDetection of powdery mildew in strawberry leavesLDA(1) Artificial lighting conditions:
recall = 95.26%, precision = 95.45%, F1 score = 95.37%; (2) Natural lighting conditions: recall = 81.54%, precision = 72%, F1 score = 75.95%
[196]Various cropsRGB imagesDetection of diseased plantsDLAcc = 93.67%
[197]CitrusHyperspectral images from UAVDetection of canker disease on leaves and immature fruitsRBFN,
KNN
RBFN: Acc values: (a) asymptomatic: 94%, (b) early stage: 96%, (c) late stage: 100%
[198]GrapeRGB imagesDetection of diseased vine on leavesSVMAcc = 95%
[199]WheatRGB imagesIdentification of three leaf diseases in wheatCNNAcc values: (1) Septoria: 100%; (2) Tan Spot: 99.32%; (3) Rust: 99.29%
[200]GrapeSpectral measurements from spectroradiometerClassification of Flavescence dorée disease in grapevinesSVM, LDASVM: Acc = 96%
[201]PapayaRGB imagesRecognition of five papaya diseasesSVMAcc = 90%, Precision = 85.6%
[202]RiceRGB imagesRecognition and classification of rice infected leavesKNN, ANNANN: Acc = 90%, Recall = 88%
[203]TomatoHyperspectral images from UAVDetection of bacterial spot and target spot on tomato leavesMLP, STDAMLP: Acc values: (a) bacterial spot: 98%, (b) target spot: 97%
[204]SquashHyperspectral images from UAV and laboratory measurementsClassification of powdery mildew in squashRBFN Acc values: (1) Laboratory: Asymptomatic: 82%, Late stage: 99%; (2) Field conditions: Early stage: 89%, Late disease stage: 96%
[205]TomatoHyperspectral images from UAV and laboratory measurementsDetection of bacterial spot and target spot on tomato leavesRBFN, STDA Field conditions: Acc values: (a) Healthy vs. BS: 98%, (b) Healthy vs. TS: 96%, (c) Healthy vs. TYLC: 100%
[206]TomatoRGB imagesIdentification of various diseases in tomatoCNNAcc values: (1) PV dataset: 98.4%; (2) 2nd dataset: 98.7%; (3) Field data: 86.27%
[79]WalnutRGB imagesIdentification of anthracnose infected leavesCNNAcc values: (1) RGB: 95.97%; (2) Grayscale: 92.47%; (3) Fast Fourier: 92.94%
[207]Various cropsRGB imagesClassification of infected leavesDBNAcc = 0.877, Sensitivity = 0.862, Specificity = 0.877
[208]GrapeMultispectral images from UAVDetection of Mildew disease in vineyardsCNNAcc values: (1) Grapevine-level: 92%; (2) Leaf level: 87%
[209]RiceRGB images, videosVideo detection of brown spot, stem borer and sheath blight in riceCNN(1) Brown spot: Recall = 75.0%,
Precision = 90.0%; (2) Stem borer:
Recall = 45.5%, Precision = 71.4%;
(3) Sheath blight: Recall = 74.1%,
Precision = 90.9%
[210]CassavaRGB imagesDetection and classification of diseased leaves of fine-grain cassavaCNNAcc = 93%
[211]BananaSatellite spectral data, Multispectral images from UAV, RGB images from UAVDetection of banana diseases in different African landscapesRF, SVMRF: Acc = 97%, omissions error = 10%; commission error = 10%; Kappa coefficient = 0.96
[212]TomatoRGB imagesDetection of early blight, leaf mold and late blight on tomato leavesCNNAcc = 98%
[213]PepperSpectral reflectance at 350–2500 nmDetection of fusarium disease in pepper leavesANN, NB, KNNKNN: Average success rate = 100%
[214]TomatoSpectral measurements from spectroradiometerDetection of fusarium disease on pepper leavesCNNAcc = 98.6%
[215]CitrusMultispectral images from UAVDetection of citrus greening in citrus orchardsSVM, KNN, MLR, NB, AdaBoost, ANNAdaBoost: Acc = 100%
[216]SoybeanRGB imagesPrediction of charcoal rot disease in soybeanGBTSensitivity = 96.25%, specificity = 97.33%
[217]WheatRGB images from UAVDetection of wheat lodging RF, CNN, SVMCNN: Acc = 93%
[218]TomatoWeather dataPrediction of powdery mildew disease in tomato plantsELMAcc = 89.19%, AUC = 88.57%
[219]SoybeanRGB imagesDiagnosis of soybean leaf diseasesCNNAcc = 98.14%
[220]PotatoRGB imagesIdentification of early and late blight diseaseNB, KNN, SVMSVM: Average Acc = 99.67%
[221]Various cropsRGB imagesQuantification of uncertainty in detection of plant diseasesBDLMean softmax probability values: (1) Healthy: 0.68; (2) Non-Healthy: 0.72;
[222]CoffeeSatellite spectral dataIdentification of coffee berry necrosis via satellite imageryMLP, RF, NBNB: Acc = 0.534
[223]TomatoRGB imagesRecognition of blight, powdery mildew, leaf mold fungus and tobacco mosaic virus diseasesCNNFaster RCNN:
mAP = 97.01%
[224]MaizeRGB imagesDiagnosis of northern leaf blight, gray leaf spot, and common rust diseases CNNAcc = 98.2%; macro average precision = 0.98
[225]GrapeRGB imagesDetection of black measles, black rot, leaf blight and mites on leavesCNNmAP = 81.1%
[226]GrapeWeather data, expert input (disease incidence form visual inspection)Forecasting downy mildew in vineyardsGLM, LASSO, RF, GBGB: AUC = 0.85
[227]MaizeRGB imagesDetection of northern leaf blight in maizeCNNmAP = 91.83%
[228]OnionRGB imagesDetection of downy mildew symptoms in onions field imagesWSLmAP@0.5 = 74.1–87.2%
[229]CoffeeRGB imagesDetection of coffee leaf rust via remote sensing and wireless sensor networksCNNF1 score = 0.775, p-value = 0.231
[230]TomatoWeather data, multispectral images captured from UAVDetection of late blight diseaseCNNAcc values: AlexNet: (1) Transfer learning: 89.69%; (2) Feature extraction: 93.4%,
[231]RiceRGB imagesDetection of brown rice planthopperCNNAverage recall rate = 81.92%, average Acc = 94.64%
[232]GrapeUAV multispectral images, depth map informationDetection of vine diseasesCNNVddNet: Accuracy = 93.72%
[233]AppleRGB imagesIdentification of apple leaf diseases (S, FS, CR)CNNImproved VGG16: Acc = 99.40%(H), 98.04% (S), 98.33%(FS), 100%(CR)
[234]CottonUAV multispectral imagesDisease classification of cotton root rotKM, SVMKM: Acc = 88.39%, Kappa = 0.7198
Acc: Accuracy; AUC: Area Under Curve; CR: Cedar Rust; ExGR: Excess Green Minus Excess Red; FS: Frogeye Spot; H: Healthy; mAP: mean Average Precision; RGB: Red-Green-Blue; S: Scab; TYLC: Tomato Yellow Leaf Curl; UAV: Unmanned Aerial Vehicle; VddNet: Vine Disease Detection Network.
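The disease-detection studies in Table A2 are mostly evaluated with accuracy, precision, recall, and F1 score. A minimal sketch of these metrics for a binary task (the toy label arrays are illustrative only):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return acc, precision, recall, f1

# Toy example: 1 = diseased leaf, 0 = healthy leaf (illustrative labels)
acc, prec, rec, f1 = classification_metrics([1, 1, 1, 0, 0, 0],
                                            [1, 1, 0, 1, 0, 0])
```

F1, the harmonic mean of precision and recall, is the single-number summary many of the tabulated studies prefer when the healthy/diseased classes are imbalanced.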
Table A3. Crop Management: Weed Detection.
Ref | Input Data | Functionality | Models/Algorithms | Best Output
[235]RGB imagesClassification of thin-leaf (monocot) and broadleaf (dicot) weedsAdaBoost with NBAcc values: (1) Original dataset: 98.40%; (2) expanded dataset: 94.72%
[236]RGB images from UAVDetection of weeds in bean, spinach fieldsCNNAcc values: (1) Bean field: 88.73%;
(2) Spinach field: 94.34%
[237]RGB imagesDetection of four weed species in sugar beet fieldsSVM, ANNOverall Acc: SVM: 95.00%; Weed classification: SVM: 93.33%; Sugar beet plants: SVM: 96.67%
[238]RGB images from UAV, multispectral imagesDetection of Gramineae weed in rice fieldsANNBest system:
80% < M/MGT < 108%, 70% < MP < 85%
[239]RGB imagesClassification of crops (three species) and weeds (nine species)CNNAverage Acc: 98.21±0.55%
[240]Multispectral and RGB images from UAVWeed mapping between and within crop rows, (1) cotton; (2) sunflowerRFWeed detection Acc:
(1) Cotton: 84%
(2) Sunflower: 87.9%
[241]Hyperspectral imagesRecognition of three weed species in maize cropsRFMean correct classification rate: (1) Zea mays: 1.0; (2) Convolvulus arvensis: 0.789; (3) Rumex: 0.691; (4) Cirsium arvense: 0.752
[242]RGB images from UAVDetection of weeds in early season maize fieldsRF Overall Acc = 0.945, Kappa = 0.912
[243]RGB images from UAVWeed mapping and prescription map generation in rice fieldFCNOverall Acc = 0.9196,
mean intersection over union (mean IU) = 0.8473
[244]Handheld multispectral dataWeed detection in maize and sugar beet row-crops with:
(1) spectral method; (2) spatial; (3) both methods
SVMMean detection rate: (1) spectral method: 75%; (2) spatial: 79%; (3) both methods: 89%
[245]Multispectral images from UAVDevelopment of Weed/crop segmentation, mapping framework in sugar beet fieldsDNNAUC: (1) background: 0.839; (2) crop: 0.681; (3) weed: 0.576
[246]RGB imagesClassification of potato plant and three weed speciesANNAcc = 98.1%
[247]RGB imagesEstimation of weed growth stage (18 species)CNNMaximum Acc = 78% (Polygonum spp.), minimum Acc = 46% (blackgrass), average Acc = 70% for exact leaf-number prediction and 96% when allowing a deviation of two leaves
[248]Multispectral imagesClassification of corn (crop) and silver beet (weed)SVMPrecision = 98%, Acc = 98%
[249]RGB imagesClassification of Carolina Geranium within strawberry plantsCNN F1 score values: (1) DetectNet: (0.94, highest);
(2) VGGNet: 0.77;
(3) GoogLeNet: 0.62
[250]RGB imagesClassification of weeds in organic carrot productionCNNPlant-based evaluation:
Acc = 94.6%,
Precision = 93.20%,
Recall = 97.5%,
F1 Score = 95.32%
[251]Grayscale images from UGVRecognition of Broad-leaved dock in grasslandsCNN, SVMVGG-F: Acc = 96.8%
[252]Multispectral images from UAVMapping of Black-grass weed in winter wheat fieldsCNNBaseline model:
AUC = 0.78; Weighted kappa = 0.59; Average misclassification rate = 17.8%
[253]RGB imagesSegmentation of rice and weed images at seedling stage in paddy fieldsFCNSemantic segmentation:
Average Acc rate = 92.7%
[254]RGB images from UGVCreation of multiclass dataset for classification of eight Australian rangelands weed speciesCNNRS-50: Average Acc = 95.7%, average inference time = 53.4 ms per image
[255]RGB imagesEvaluation of weed detection, spraying and mapping system. Two Scenarios: (1) artificial weeds, plants; (2) real weeds, plantsCNNScenario: (1) Acc = 91%, Recall = 91%; (2) Acc = 71%, Precision = 78% (for plant detection and spraying Acc)
[256]RGB imagesDetection of goldenrod weed in wild blueberry cropsLC, QCQC: Acc = 93.80%
[257]RGB imagesDetection of five weed species in turfgrassCNNPrecision values: Dollar weed: VGGNet (0.97); old world diamond-flower: VGGNet (0.99); Florida pusley: VGGNet (0.98); annual bluegrass: DetectNet (1.00)
[258]RGB imagesDetection of three weed species in perennial ryegrassCNNPrecision values: Dandelion: DetectNet (0.99); ground ivy: VGGNet (0.99), spotted spurge:
AlexNet (0.87)
[259]RGB images, multispectral images from UGVCrop-weed classification along with stem detectionFCNOverall: Mean precision = 91.3%, Mean recall = 96.3%
[260]RGB imagesIdentification of crops (cotton, tomato) and weeds (velvetleaf and nightsade)CNN, SVM, XGBoost, LRDensenet and SVM:
micro F1 score = 99.29%
[261]Videos recordingsClassification of two weeds species in rice fieldANN, KNNAcc values: Right channel (76.62%), Left channel (85.59%)
[262]RGB imagesWeed and crop discrimination in paddy fieldsMCS, SRF, SVMAcc values: Right channel (76.62%), Left channel (85.59%)
[263]Gray-scale and RGB imagesWeed and crop discrimination in carrot fieldsRFAcc = 94%
[264]Multispectral and RGB imagesDiscrimination of weed and crops with similar morphologiesCNNAcc = 98.6%
[265]RGB imagesDetection of C. sepium weed and sugar beet plantsCNNmAP = 0.751–0.829,
mAP@0.5 = 0.761–0.897
[266]RGB imagesRecognition of eight types of weeds in rangelandsCNN, RNNDeepWeeds dataset:
Acc = 98.1%
[267]Multispectral images from UAVWeed estimation on lettuce cropsSVM, CNNF1 score values: (1) SVM: 88%; (2) CNN-YOLOv3: 94%; (3) Mask R-CNN: 94%
[268]RGB imagesExamination of pre-trained DNN for improvements in weed identificationCNN(1) Xception: improvement = 0.51%; (2) Inception-Resnet: improvement = 1.89%
[269]RGB images from UAVDetection of five weeds in soybean fields CNNFaster RCNN: precision = 0.65, recall = 0.68, F1 score = 0.66, IoU = 0.85
[270]RGB imagesDetection of goose grass weed in tomato, strawberry fieldsCNN(1) Strawberry: (a) entire plant: F1 score = 0.75, (b) leaf blade: F1 score = 0.85;
(2) Tomato: (a) entire plant: F1 score = 0.56, (b) leaf blade: F1 score = 0.65
[271]Video recordingsDetection of five weed species in Marfona potato fieldsANNCorrect classification rate = 98.33%
[272]In situ measurements, satellite spectral dataIdentification of gamba grass in pasture fieldsXGBoostBalanced Acc = 86.9%
[273]RGB images from UAV, satellite spectral dataWeed maps creation in oat fieldsRFAcc values: (1) Subset A: 89.0%; (2) Subset B: 87.1%
[274]In situ measurements, RGB images from UAVIdentification of Italian ryegrass in early growth wheatDNNPrecision = 95.44%, recall = 95.48%, F score = 95.56%
[275]RGB images from UGVWeed detection evaluation of a spraying robot in potato fields on: (1) Image-level; (2) application-level; (3) field-levelCNNYOLOv3: (1) Image-level: recall = 57%, precision = 84%; (2) application-level: plants detected = 83%; (3) field-level: correct spraying = 96%
[276]RGB images from UGVDetection of four weed species in maize and bean cropsCNNAverage precision = 0.15–0.73
[277]RGB images from UAVDetection of Colchicum autumnale in grassland sitesCNNU-Net: Precision = 0.692, Recall = 0.886, F2 score = 0.839
[278]RGB images from UAVWeed mapping of Rumex obtusifolius in native grasslandsCNNVGG16: Acc = 92.1%, F1 score = 78.7%
Acc: Accuracy; AUC: Area under Curve; IoU: Intersection over Union; mAP: mean Average Precision; RGB: Red-Green-Blue; UAV: Unmanned Aerial Vehicle; UGV: Unmanned Ground Vehicle.
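The detection metrics reported throughout Table A3 (IoU, mAP, precision, recall, F1) all build on the Intersection over Union between a predicted and a ground-truth bounding box. The following is a minimal illustrative sketch, not code from any of the reviewed papers; the `(x1, y1, x2, y2)` corner format is an assumed convention:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

In detection benchmarks a prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (e.g., 0.5); precision, recall and mAP values such as those in Table A3 follow from those counts.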
Table A4. Crop Management: Crop Recognition.
RefCropInput DataFunctionalityModels/AlgorithmsBest Output
[279]Various cropsSatellite spectral dataClassification of early-season cropsRFBeginning of growth stage: acc = 97.1%, kappa = 93.5%
[280]Various cropsSatellite spectral and phenological dataIdentification of various crops from remote sensing imagerySVM, RF, DFDF: (1) 2015: overall acc = 88%; (2) 2016: overall acc = 85%
[281]Maize, Rice, SoybeanSatellite spectral dataThree-dimensional classification of various cropsCNN, SVM, KNNCNN: (1) 2015: overall acc = 0.939, kappa = 0.902; (2) 2016: overall acc = 0.959, kappa = 0.924
[282]Various cropsSatellite spectral data, in situ dataIdentification of crops growing under plastic covered greenhousesDTOverall acc = 75.87%, Kappa = 0.63
[283]Various cropsSatellite data, phenological, in situ dataClassification of various cropsNB, DT, KMKM: overall acc = 92.04%, Kappa = 0.7998
[284]Cabbage, PotatoRGB images from UAV, in situ dataClassification of potato and cabbage cropsSVM, RFSVM: overall acc = 90.85%
[285]Various cropsSatellite spectral dataClassification of various cropsSVMOverall acc = 94.32%
[286]Various cropsSatellite spectral data, in situ dataClassification of various crops in large areasEBT, DT, WNNEBT: overall acc = 87%
[287]Various cropsSatellite spectral data, in situ dataClassification of various cropsSVMoverall acc = 92.64%
[288]Various cropsField location, in situ and satellite spectral dataClassification of six crops with small sample sizesFFNN, ELM, MKL, SVMMKL: accuracy = 92.1%
[289]Wolfberry, Maize, VegetablesSatellite spectral dataCrop classification in cloudy and rainy areasRNNLandsat-8: overall acc = 88.3%, Kappa = 0.86
[290]Maize, Canola, WheatSatellite spectral data, in situ dataCrop classificationRF, ANN, SVMRF: overall acc = 0.93, Kappa = 0.91
[291]Various cropsSatellite spectral dataClassification of various crop typesCombination of FCN-LSTMAcc = 86%, IoU = 0.64
[292]Various cropsSatellite spectral dataCrop classification of various cropsLightGBMHighest acc: 92.07%
[293]Maize, Peanut, Soybeans, RiceSatellite spectral and in situ dataPrediction of different crop typesFCN, SVM, RFBest crop mapping: FCN: acc = 85%, Kappa = 0.82
[294]Various cropsSatellite spectral and in situ dataClassification of early growth cropsCNN, RNN, RFHighest Kappa: 1D CNN: 0.942
[295]Various cropsSatellite spectral and in situ dataClassification of various cropsCNN, LSTM, RF, XGBoost, SVMCNN: acc = 85.54%, F1 score = 0.73
[296]Various cropsSatellite spectral dataClassification of parcel-based cropsLSTM, DCNDCN: overall acc = 89.41%
[297]Various cropsSatellite spectral dataClassification of crops in farmland parcel mapsLSTM, RF, SVMLSTM: overall acc = 83.67%, kappa = 80.91%
[298]Various cropsSatellite spectral data, in situ dataCrop classificationSVM, RF, CNN-RNN, GBMPixel R-CNN: acc = 96.5%
[299]Zea mays, Canola, radishGrayscale testbed dataClassification of the cropsSVMQuadratic SVM: Precision = 91.87%, Recall = 91.85%, F1 score = 91.83%
[300]RiceMorphological dataClassification of two rice species (Osmancik-97 and Cammeo)LR, MLP, SVM, DT, RF, NB, KNNLR: acc = 93.02%
[301]SoybeanHyperspectral data, seed propertiesDiscrimination of 10 soybean seed varietiesTS-FFNN, SIMCA, PLS-DA, BPNNTS-FFNN in terms of identification Acc, stability and computational cost
[302]CottonHyperspectral data, seed propertiesIdentification of seven cotton seed varieties: (1) Full spectra, (2) Effective wavelengthsPLS-DA, LGR, SVM, CNN(1) Full spectra:
CNN-SoftMax: 88.838%;
(2) Effective wavelengths:
CNN-SVM: 84.260%
[303]Various plantsRGB images of leavesRecognition of 15 plant species of Swedish leaf datasetCNNMacro average: (1) Precision = 0.97, (2) Recall = 0.97, (3) F1 score = 0.97
[304]Various shrubs and treesRGB images of leavesIdentification of 30 shrub and trees speciesRF, SVM, AdaBoost, ANNSVM: acc = 96.5–98.4%
[305]Various plantsRGB images of leavesIdentification of seven plant speciesBPNN, SOM, KNN, SVMBPNN: Recognition rate = 92.47%
[306]Various crops Satellite spectral dataCrop classificationSVMSVM (RBF): overall acc values: (1) 2016: 88.3%; (2) 2017: 91%; (3) 2018: 85.00%
[307]Various cropsSatellite spectral dataCrop classificationFCN3D FCN: overall acc = 97.56%, Kappa = 95.85%
[308]Cotton, Rice, Wheat, GramSatellite spectral dataCrop classificationRF, KMRF: acc = 95.06%
[309]Various cropsSatellite spectral dataCrop classificationSVM, RF, CARTRF: overall acc = 97.85%, Kappa = 0.95
[310]Various cropsSatellite spectral data, in situ dataCrop classificationRFoverall acc = 75%, Kappa = 72%
[311]Maize, SoybeanSatellite spectral dataCrop classificationRF, MLP, LSTMLSTM: confidence interval = 95%
[312]Various cropsSatellite spectral and in situ dataCrop classificationXGBoost, SVM, RF, MLP, CNN, RNNCNN: overall acc = 96.65%
[313]RiceSatellite spectral dataCrop classificationCNN, SVM, RF, XGBoost, MLPCNN: overall acc = 93.14%, F1 score = 0.8552
[314]Various cropsSatellite spectral and in situ dataCrop classificationRFOverall acc = 0.94, Kappa = 0.93
[315]Various cropsSatellite spectral dataCrop classificationCNN, LSTM, SVMCNN: overall acc = 95.44%, Kappa = 94.51%
[316]Various cropsSatellite spectral dataCrop classification prior to harvestingDT, KNN, RF, SVMRF: overall acc = 81.5%, Kappa = 0.75
[317]Various cropsSatellite spectral dataCrop classificationCNNOverall acc = 98.19%
[318]Various cropsSatellite spectral dataCrop classificationSVM, DA, DT, NNLNNL: F1 score = 0.88
[319]Banana, Rice, Sugarcane, CottonSatellite spectral and in situ dataCrop classificationSVMOverall acc = 89%
[320]Various cropsSatellite spectral and in situ dataCrop classificationRFOverall acc = 93.1%
Acc: Accuracy; IoU: Intersection over Union; RGB: Red-Green-Blue; UAV: Unmanned Aerial Vehicle.
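Most crop-classification entries in Table A4 report overall accuracy together with Cohen's kappa, both of which are derived from the confusion matrix. A minimal sketch follows; it is illustrative only (the convention that rows are true classes and columns are predicted classes is an assumption):

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples from a square confusion matrix."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def cohen_kappa(cm):
    """Cohen's kappa: agreement corrected for chance, from a confusion matrix."""
    n = sum(sum(row) for row in cm)
    p_observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Expected agreement under independence of row and column marginals
    p_expected = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa discounts agreement expected by chance, which is why papers in the table quote it alongside (and usually below) overall accuracy.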
Table A5. Crop Management: Crop Quality.
RefCropInput DataFunctionalityModels/AlgorithmsBest Output
[64]ApplesQuality features (flesh firmness, soluble solids, fruit mass and skin color)Classification of apple total quality: very poor, poor, medium, good and excellentFIS, ANFISFIS: acc values: (1) 2005: 83.54%; (2) 2006: 92.73%; (3) 2007: 96.36%
[321]PepperRGB images, quality features (color, mass and density of peppers)Recognition of pepper seed qualityBLR, MLPMLP: 15 traits, stability = 99.4%, predicted germination = 79.1%, predicted selection rate = 90.0%
[322]Soybeans Satellite spectral and soil dataEstimation of crop gross primary productivityRF, ANNANN: R2 = 0.92, RMSE = 1.38 gCdm−2
[323]WheatRGB images captured by UAVEstimation of aboveground nitrogen content combining various VI and WFsPLSR, PSO-SVRPSO-SVR: R2 = 0.9025, RMSE = 0.3287
[324]Millet, rye, maizeRGB images captured in laboratoryAssessment of grain crops seed qualityCNNFaster R-CNN: (1) Pearl millet: mAP = 94.3%; (2) rye: mAP = 94.2%, (3) Maize: mAP = 97.9%
[325]Jatropha curcasX-ray imagingPrediction of vigor and germinationLDAAcc values:
Fast germination: 82.08%;
Slow germination: 76.00%;
Non-germinated: 88.24%
[326]Various legumesSpectral data from spectroradiometerEstimation of five warm-season legumes forage qualityPLS, SVM, GPSVM: All five crops: Acc (R2cv, R2v) = 0.92–0.99; IVTD: Acc (R2cv, R2v) = 0.42–0.98
[327]Forage grassX-ray imagingPrediction of vigor and seed germinationLDA, PLS-DA, RF, NB, SVMPLS-DA: Acc values:
(1) Vigor: FT-NIR: 0.61, X-ray: 0.68,
Combination: 0.58;
(2) Germination: FT-NIR: 0.82, X-ray: 0.86, Combination: 0.82
[328]TomatoRGB imagesDimensions and mass estimation for quality inspection(1) DSM, (2) Dimensions (CNN), (3) Mass estimation on: (a) MMD (BET, GPR, SVR, ANN, GPR), (b) EDG (BET, GPR, SVR, ANN)(1) DSM: precision = 99.7%; MAE values: (2) Width (2.38), Length (2.58); (3) Mass estimation: (a) MMD (4.71), (b) EDG (13.04)
[329]PeachHyperspectral imagesEstimation of soluble solids contentSAE-RFR2 = 0.9184, RMSE = 0.6693
Acc: Accuracy; DSM: Detection and Segmentation Module; EDG: Estimated Dimensions Geometry; IVTD: In Vitro True Digestibility; MMD: Manually Measured Dimensions; mAP: mean Average Precision; PSO: Particle Swarm Optimization; RGB: Red-Green-Blue; SAE: Stacked AutoEncoder; VI: Vegetation Indices; WF: Wavelet Features.
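The regression-type quality estimations in Table A5 (and in the water and soil tables that follow) are judged mainly by R2 and RMSE. A minimal sketch of both, illustrative only and not taken from any reviewed paper:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

RMSE keeps the units of the target variable (e.g., mm/day for evapotranspiration), while R2 is dimensionless, which is why the tables report both.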
Table A6. Water management.
RefPropertyInput DataFunctionalityModels/AlgorithmsBest Output
[330]Crop water statusWeather data, crop water status, thermal imagesPrediction of vineyard’s water status. Scenario A: with RT; Scenario B: without RTREPTree(1) Scenario A: prediction: R2 = 0.58, RMSE = 0.204 MPa; (2) Scenario B: prediction: R2 = 0.65, RMSE = 0.184 MPa.
[331]Crop water statusCrop water status, hyperspectral dataDiscrimination of stressed and non-stressed vinesRF, XGBoostRF: Acc = 83.3%, Kappa = 0.67
[332]Groundwater levelWater table depth, weather dataPrediction of water table depthLSTM, FFNNLSTM: R2 = 0.789–0.952
[333]Irrigation schedulingWeather, irrigation, soil moisture, yield dataPrediction of weekly irrigation plan in jojoba orchardsDTR, RFR, GBRT, MLR, BTC(1) Regression: GBRT: Acc = 93%; (2) Classification: GBRT: Acc = 95%
[334]Crop water statusWater status, multispectral UAV dataEstimation of vineyard water statusMLR, ANNANN: R2 = 0.83
[335]ETWeather dataEstimation of daily EToELM, WANNELM: RMSE values: Region case A: 0.1785 mm/day; Region case B: 0.359 mm/day
[336]ETWeather dataEstimation of daily EToRF, M5Tree, GBDT, XGBoost, SVMXGBoost: RMSE = 0.185–0.817 mmday−1
[337]Soil water contentWeather data, volumetric soil moisture contentPrediction of one-day-ahead volumetric soil moisture contentFFNN, LSTMLSTM: R2 > 0.94
[338]InfiltrationField data, moisture content, cumulative infiltration of soilEstimation of cumulative infiltration of soilSVM, ANN, ANFISANFIS: RMSE = 0.8165 cm, CC = 0.9943
[339]Soil water contentWeather data, soil moisture difference, ultraviolet radiationPrediction of soil moistureSVRR = 0.98, R2 = 0.96, MSE = 0.10
[340]Soil water contentSimulated soil moisture data, weather dataForecasting of monthly soil moisture for: Scenario A: upper; Scenario B: lower layersELM(1) Scenario A: RRMSE = 19.16%;
(2) Scenario B: RRMSE = 18.99%
[341]ETWeather and in situ crop dataEstimation of actual ET
Scenario A: rainfed maize field under non-mulching; Scenario B: partial plastic film mulching
ANN, SVMANN: Scenario A: ET = 399.3 mm, RMSE = 0.469, MAE = 0.376;
Scenario B: ET = 361.2 mm, RMSE = 0.421, MAE = 0.322
[342]Infiltration and infiltration rateSoil and hydraulic dataPrediction of cumulative infiltration and infiltration rate in arid areasANFIS, SVM, RFSVM: RMSE values: cumulative infiltration: 0.2791 cm, infiltration rate: 0.0633 cmh−1
[343]Water qualityNIR spectroscopy.Estimation of water pollution levelCNNRMSE = 25.47 mgL−1
[344]ETWeather data, simulated ET dataEstimation of ETo: (1) 2011–2015; (2) 2016–2017LSTM(1) Predictions in 3 sites: R2 > 0.90; (2) All sites: RMSE = 0.38–0.58 mmday−1
[345]Soil water contentWeather data, potential ET, simulated soil moisture dataEstimation of soil moistureFFNN, Ross-IESFFNN: RMSE = 0.15–0.25, NSE = 0.71–0.91
[346]ETWeather data, simulated ET data, soil dataEstimation of daily kikuyu grass crop ETRT, SVR, MLP, KNN, LGR, MLR, BN, RFCRFC: R = 0.9936, RMSE = 0.183 mmday−1, MRE = 6.52%
[347]DroughtWeather dataEvaluation of farmers’ drought perceptionRF, DTMost influential parameters: farmer’s age, education level, years of experience and number of cultivated land plots
[348]ETWeather and soil data; simulated ETPrediction of daily potato ETANN,
AdaBoost, KNN
KNN: R2 = 0.8965, RMSE = 0.355 mm day−1, MSE = 0.126 mm day−1
[349]Soil water erosionIn situ data, geological, and weather dataSusceptibility mapping of soil erosion from waterRF, GP, NBRF: Acc = 0.91, kappa = 0.94, POD = 0.94
[350]ET, droughtWeather data, simulated ET indexPrediction of droughtSVRFuzzy-SVR: R2 = 0.903, RMSE = 0.137, MAE = 0.105
[351]ETWeather data, simulated EToEstimation of daily EToCNN, ANN, XGBoost, RFCNN: (1) Regional: R2 = 0.91, RMSE = 0.47; (2) Local: R2 = 0.92, RMSE = 0.37
[352]ETWeather dataEstimation of daily EToELM, ANN, RFELM: R2 = 0.920, MAE = 0.394 mmday−1
[353]ETWeather dataPrediction of ET in semi-arid and arid regionsCART, CCNN, SVMSVM: (1) Station I: R2 = 0.92; (2) Station II: R2 = 0.97
[354]Pan evaporationWeather dataPrediction of monthly pan evaporationELM, ANN, M5TreeELM: R2 = 0.864–0.924, RMSE = 0.3069–0.4212
[355]ETWeather data, simulated EToEvaluation of ML algorithms in daily reference ET predictionCubist, SVM, ANN, MLRCubist: R2 = 0.99, RMSE = 0.10 mmday−1, MAE = 0.07 mmday−1
[356]ETWeather data, simulated ETEstimation of EToSVM, MLP, CNN, GRNN, GMDHSVM: R = 0.96–1.00, ME = 95–99%
[357]DroughtWeather data, simulated Palmer Z-index valuesEstimation of Palmer drought severity indexANN, DT, LR, SVMANN: R = 0.98, MSE = 0.40, RMSE = 0.56
[358]Water qualityIn-situ water quality data, hyperspectral, satellite data.Estimation of inland water quality.LSTM, PLSR, SVR, DNNDNN: R2 = 0.81, MSE = 0.29, RMSE = 0.54
[359]GroundwaterIn-situ water quality data, hyperspectral, satellite spectral dataEstimation of water qualityDTAcc = 81.49%, ROC = 87.75%
[360]GroundwaterWeather data, ET, satellite spectral data, land useEstimation of groundwater withdrawalsRFR2 = 0.93, MAE = 4.31 mm, RMSE = 13.50 mm
[361]Groundwater nitrate concentrationVarious geo-environmental dataComparison of different ML models for estimating nitrate concentrationSVM, Cubist, RF, Bayesian-ANNRF: R2 = 0.89, RMSE = 4.24, NSE = 0.87
Acc: Accuracy; CC: Coefficient of Correlation; ET: Evapotranspiration; ETo: reference EvapoTranspiration; ROC: Receiver Operating Characteristic; ME: Model Efficiency; NSE: Nash-Sutcliffe model efficiency Coefficient; POD: Probability Of Detection.
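Many entries in Table A6 learn daily reference evapotranspiration (ETo) from weather data, typically benchmarked against a physically based estimate. One common temperature-only baseline is the Hargreaves–Samani equation; the sketch below is illustrative and assumes extraterrestrial radiation Ra is already expressed in evaporation-equivalent mm/day:

```python
def hargreaves_eto(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration estimate (mm/day).

    t_mean, t_max, t_min: daily air temperatures (degrees C).
    ra: extraterrestrial radiation in evaporation-equivalent mm/day.
    """
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5
```

ML models in the table (ELM, XGBoost, CNN, etc.) try to approach the accuracy of data-hungrier formulations such as FAO-56 Penman–Monteith while using inputs as sparse as this baseline.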
Table A7. Soil management.
RefPropertyInput DataFunctionalityModels/AlgorithmsBest Output
[362]Soil organic matterSoil properties, spectrometer NIR dataEstimation of soil organic matterELM, SVMTRI-ELM: R2 = 0.83, RPIQ = 3.49
[363]Soil microbial dynamicsMicrobial dynamics measurements from root samplesPrediction of microbial dynamics: (1) BP; (2) PS and (3) ACCAANN, SVR, FISSCFIS: (1) BP: RMSE = 1350000, R2 = 1.00; (2) PS: RMSE = 45.28, R2 = 1.00; (3) ACCA: RMSE = 271, R2 = 0.52
[364]Soil salinitySoil salinity, hyperspectral data, satellite dataPrediction of soil salinityBootstrap
BPNN
BPNN with hyperspectral data: R2 = 0.95, RMSE = 4.38 g/kg
[365]Soil propertiesSimulated topographic attributes, satellite dataPrediction of SOC, CCE, clay contentCu, RF, RT, MLR(1) CCE: Cu: R2 = 0.30, RMSE = 9.52; (2) SOC: Cu, RF: R2 = 0.55; (3) Clay contents: RF: R2 = 0.15, RMSE = 7.86
[366]Soil organic matterSoil properties, weather data, terrain, satellite spectral dataPrediction of soil organic matterDT, BDT, RF, GBRTGBRT: ME = 1.26 g/kg, RMSE = 5.41 g/kg, CCC = 0.72
[367]Soil organic mattersoil properties, satellite, land cover, topographic, weather dataPrediction of soil organic matterCNN, RF, XGBoostXGBoost: ME = 0.3663 g/kg, MSE = 1.0996 g/kg
[368]Electrical conductivitysoil properties, simulated electrical conductivityPrediction of soil electrical conductivityMLPMLP: WI = 0.780, ENS = 0.725,
ELM = 0.552
[369]Soil moisture contentHyperspectral image data from UAV, soil moisture content samplesEstimation of soil moisture contentRF, ELMRF: R2 = 0.907, RMSEP = 1.477, RPD = 3.396
[370]Soil temperatureWeather dataEstimation of soil temperature at various depthsELM, GRNN, BPNN, RFELM: RMSE = 2.26–2.95 °C, MAE = 1.76–2.26 °C, NSE = 0.856–0.930, CC = 0.925–0.965
[371]SOCSoil properties, vis-NIR spectral dataEstimation of SOCRFR2 = 0.74–0.84,
RMSEP = 0.14–0.18%, RPD = 1.98–2.5
[372]Soil propertiesSoil properties, visible-NIR, MIR spectral dataPrediction of total carbon, cation exchange capacity and SOCPLSR, Cu, CNNCNN: R2 = 0.95–0.98
[373]Soil propertiesSoil properties, simulated organic, mineral samples, soil spectral data Estimation of various soil propertiesCNNRMSE values: OC: 28.83 g/kg, CEC: 8.68 cmol+/kg, Clay: 7.47%, Sand: 18.03%,
pH: 0.5 g/kg, N: 1.52 g/kg
[374]Soil moisture content, soil ETSoil properties, water, weather and crop dataEstimation of soil moisture content and soil ETNN-RBFSoil MC: RMSE = 0.428, RSE = 0.985, MSE = 0.183, RPD = 8.251
[375]Soil salinitySoil salinity, crop field temperatureEstimation of leaching water requirements for saline soilsNaive Bayes classifierAcc = 85%
[376]Soil erosionWeather data, satellite, soil chemical dataEstimation of soil erosion susceptibilityCombination of GWR-ANNGWR-ANN: AUC = 91.64%
[377]Soil fertilitySpectral, weather data, EC, soil propertiesPrediction of soil fertility and productivityPLS(1) Productivity: RMSEC = 0.20 T/ha, RMSECV = 0.54 T/ha, R2 = 0.9189;
(2) Organic matter: R2 = 0.9345, RMSECV = 0.54%; (3) Clay: R2 = 0.9239, RMSECV = 5.28%
[378]Soil moistureMultispectral images from UAV, in situ soil moisture, weather data.Retrieval of surface soil moistureBRT, RF, SVR, RVRBRT: MAE = 3.8%
[379]Soil moistureSoil samples, simulated PWP, field capacity dataEstimation of PWP and field capacityANN, KNN, DLR2 = 0.829, R = 0.911, MAE = 0.027
[380]Soil temperatureWeather dataEstimation of soil temperatureGMDH, ELM, ANN, CART, MLRELM: R = 0.99
[381]Soil moistureSoil samples, on-field thermal, simulated soil moisture dataEstimation of soil moisture contentANN, SVM, ANFISSVM: R = 0.849, RMSE = 0.0131
[382]Gully erosionGeological, environmental, geographical dataEvaluation of gully erosion susceptibility mappingRF, CDTree, BFTree, KLRRF: AUC = 0.893
[383]Groundwater salinityTopographic, groundwater salinity dataEvaluation of groundwater salinity susceptibility mapsStoGB, RotFor, BGLMBGLM: Kappa = 0.85
[384]Heavy metals transferSoil and crop propertiesIdentification of factors related to heavy metals transferRF, GBM, GLMRF: R2 = 0.17–0.84
[385]Land suitabilitySoil properties, weather, topography dataPrediction of land suitability mapsSVM, RFRF: Kappa = 0.77, overall acc = 0.79
[386]SOCSoil properties, satellite, simulated environmental dataPrediction of SOCMLR, SVM, Cu, RF, ANNRF: R2 = 0.68
[387]Electrical conductivity, SOCSoil properties, weather dataElectrical conductivity and SOC predictionGLM(1) EC: MSPE = 0.686, MAPE = 0.635; (2) OC: MSPE = 0.413, MAPE = 0.474
[388]SOC, soil moistureProximal spectral data, electrical conductivity, soil samples dataPrediction of SOC and soil moisture 3D mapsCu, RFCu: R2 = 0.76, CCC = 0.84, RMSE = 0.38%
[389]Soil aggregate stabilitySoil samples dataPrediction of soil aggregate stabilityGLM, ANNANN: R2 = 0.82
[390]SOCSoil samples, weather, topographic, satellite dataPrediction of SOCCu, RF, SVM, XGBoost, KNNBest SOC prediction: RF: RMSE = 0.35%, R2 = 0.6
[391]Soil moistureIn situ soil moisture, satellite dataEstimation of surface soil moistureSVM, RF, ANN, ENRF: NSE = 0.73
[392]SOCComposite surface soil, satellite, weather dataPrediction of SOCSVM, ANN, RT, RF, XGBoost, DNNDNN: MAE = 0.59%, RMSE = 0.75%, R2 = 0.65, CCC = 0.83
[393]Gully erosionTopographic, weather, soil dataMapping of gully erosion susceptibilityLMT, NBTree, ADTreeLMT: AUC = 0.944
[394]Gully erosionSatellite spectral dataIdentification of gully erosionLDA, SVM, RFBest overall acc: RF: 98.7%
[395]Gully erosionSatellite, weather, land type maps dataGully erosion mappingLGRAcc = 68%, Kappa = 0.42
ACCA: Aminocyclopropane-1-carboxylate; AUC: Area Under Curve; BP: Bacterial Population; CC: Coefficient of Correlation; CCC: Concordance Correlation Coefficient; CCE: Calcium Carbonate Equivalent; ET: EvapoTranspiration; MIR: Mid InfraRed; NSE: Nash-Sutcliffe model efficiency Coefficient; NIR: Near-InfraRed; PS: Phosphate Solubilization; PWP: Permanent Wilting Point; RPIQ: Ratio of Performance to Interquartile Range; RPD: Relative Percent Deviation; SOC: Soil Organic Carbon; WI: Willmott’s Index.
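Several spectroscopy-based soil entries in Table A7 report RPD, the ratio of the reference data's standard deviation to the prediction RMSE (values above roughly 2 are conventionally taken as good calibrations). A minimal sketch, illustrative only (sample standard deviation is an assumed convention):

```python
import math

def rpd(y_true, y_pred):
    """Relative Percent Deviation: SD of reference values / RMSE of predictions."""
    n = len(y_true)
    mean = sum(y_true) / n
    sd = math.sqrt(sum((t - mean) ** 2 for t in y_true) / (n - 1))  # sample SD
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return sd / rmse
```

Because RPD normalizes the error by the natural spread of the property (e.g., SOC content), it lets the table compare calibrations across soils with very different ranges.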
Table A8. Livestock Management: Animal Welfare.
RefAnimalInput DataFunctionalityModels/AlgorithmsBest Output
[396]Swine3D, 2D video imagesDetection of pigs tail posture as a sign of tail bitingLMMLow vs. not low tails: Acc = 73.9%, Sensitivity = 88.4%, Specificity = 66.8%
[397]SheepAccelerometer and gyroscope attached to the ear and collar of sheepClassification of Grazing and Rumination Behavior in SheepRF, SVM, KNN, AdaboostRF: Highest overall acc: collar: 92%; ear: 91%
[398]SheepAccelerometer, gyroscope dataClassification of sheep behavior (lying, standing and walking)RFAcc = 95%, F1-score = 91–97% for: ear: 32 Hz, 7 s, collar: 32 Hz, 5 s
[399]SwineRGB imagesRecognition of pigs feeding behaviorCNNFaster R-CNN: Precision = 99.6%, recall = 86.93%
[400]SwineRGB images, depth imagesRecognition of lactating sow posturesCNNFaster R-CNN: Sow posture:
(1) Recumbency: night: 92.9%, daytime: 84.1%;
(2) Standing: at night: 0.4%, daytime: 10.5%
(3) Sitting: night: 0.55%, daytime: 3.4%
[401]Cattle, Sheep, sheepdogAudio field recordings dataClassification of animals’ vocalizationSVMAcc: cattle: 95.78%, sheep: 99.29%, dogs: 99.67%
[402]CattleAccelerometer dataDetection of ruminationSVMAcc = 86.1%
[403]SheepEar-borne accelerometer data, observation recordingsClassification of grazed sheep behavior Scenario A: walking, standing, lying, grazing
Scenario B: active/inactive
Scenario C: body posture
CART, SVM, LDA, QDA(1) Scenario A: SVM: Acc = 76.9%; (2) Scenario B: CART: Acc = 98.1%; (3) Scenario C: LDA: Acc = 90.6%
[404]GoatOn-farm videos, weather dataClassification of goats behavior
(1) Anomaly detection (2) Feeding/non-feeding
KNN, SVR, CNN(1) Most accurate: KNN: Acc = 95.02–96.5%; (2) Faster R-CNN: Eating: 55.91–61.33 %, Non-feeding (Resting): 79.91–81.53 %
[405]Cattle, sheep UAV Video dataCounting and classification of cattle, sheepCNNMask R-CNN: Cattle: Acc = 96%; Sheep: Acc = 92%
[406]CattleAccelerometer dataPrediction of dairy cows behavior at pastureXGBoost, SVM, AdaBoost, RFBest predictions for most behaviours: XGBoost: sensitivity = 0.78
[407]CattlePedometersDetection of early lameness in dairy cattleRF, KNNRF: acc = 91%
[408]CattleEnvironmental heat stressors dataEvaluation of heat stressors influence in dairy cows physiological responsesRF, GBM, ANN, PLRRF: (1) RR: RMSE = 9.695 respmin−1; (2) ST: RMSE = 0.334 °C
[409]CattleDiets nutrient levels dataPrediction of dairy cows diet energy digestionELM, LR, ANN, SVMBest performance: kernel-ELM: (1) DE: R2 = 0.8879, MAE = 4.0606; (2) ED: R2 = 0.899, MAE = 2.3272
[410]CattleRoutine herd dataDetection of lameness in dairy herdsGLM, RF, GBM, XGBoost, CARTGBM: AUC = 0.75, Sensitivity = 0.58, Specificity = 0.83
[411]PoultryAir quality dataEarly prediction of Coccidiosis in poultry farmsKNNAUC = 0.897–0.967
[412]CattleOn-farm questionnaires, clinical and milk recordsPrediction of mastitis infection in dairy herdsRFCONT vs. ENV: Acc = 95%, PPV = 100%, NPV = 95%
[413]CattleLocation (transceiver) and accelerometer dataDetection of dairy cows in estrusKNN, LDA, CART, BPNNBPNN: specificity = 85.71%
[414]CattleMid-NIR spectral data using spectrometerPrediction of bovine tuberculosis in dairy cowsCNNAccuracy = 71%, sensitivity = 0.79, specificity = 0.65
[415]CattleMetabolomics data from serum samplesEvaluation of metabotypes existence in overconditioned dairy cowsRF, NB, SMO, ADTADT: acc = 84.2%
[416]CattleAccelerometer dataClassification of cows’ behavior GBDT, SVM, RF, KNNGBDT: acc = 86.3%, sensitivity = 80.6%
[417]SheepGyroscope and accelerometer ear sensorsDetection of lame and non-lame sheep in three activitiesRF, SVM, MLP, AdaBoostRF: overall acc = 80%
[418]CattleActivity and rumination dataPrediction of calving day in cattleRNN, RF, LDA, KNN, SVMRNN/LSTM: Sensitivity = 0.72, Specificity = 0.98
AUC: Area Under Curve; Cont: Contagious; DE: Digestible Energy; ED: Energy Digestibility; ENV: Environmental; DWT: Discrete Wavelet Transform; MFCCs: Mel-Frequency Cepstral Coefficients; NIR: Near InfraRed; NPV: Negative Predictive Value; PTZ: Pan-Tilt-Zoom; PPV: Positive Predictive Value; RGB: Red-Green-Blue; RR: Respiration Rate; ST: Skin Temperature.
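The health-monitoring entries in Table A8 (lameness, mastitis, estrus, calving detection) quote sensitivity and specificity, which come directly from confusion counts. A minimal sketch, illustrative only:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from the four cells of a binary confusion matrix."""
    sensitivity = tp / (tp + fn)  # fraction of actual positives detected
    specificity = tn / (tn + fp)  # fraction of actual negatives correctly rejected
    return sensitivity, specificity
```

For on-farm alerting the two trade off directly: a lameness detector tuned for high sensitivity flags more truly lame animals at the cost of more false alarms, which is why papers in the table report both rather than accuracy alone.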
Table A9. Livestock Management: Livestock Production.
RefAnimalInput DataFunctionalityModels/AlgorithmsBest Output
[419]CattleDepth images, in situ BCS evaluation dataEstimation of BCS, Scenario A: HER = 0.25; Scenario B: HER = 0.5CNNScenario A: Acc = 78%; Scenario B: Acc = 94%
[420]SwineWeather, physiological dataPrediction of piglets temperature
Scenario A: skin-surface; Scenario B: hair-coat; Scenario C: core
DNN, GBR, RF, GLRBest prediction: Scenario C: DNN: error = 0.36%
[421]PoultryDepth, RGB images dataClassification of flock of chickens’ behaviorCNNAcc = 99.17%
[422]CattleAccelerometer, observations recordings dataClassification of cattle behaviour
Scenario A: grazing; Scenario B: standing; Scenario C: ruminating
RFHighest F-scores: RF: Scenario A: 0.914; Scenario B: 0.89; Scenario C: 0.932
[423]SheepPhenotypic, weather dataPrediction of on-farm water and electricity consumption on pasture based Irish dairy farmsBAG, ANN, MTScenario 3: MT: R = 0.95, MAE = 0.88 μm, RMSE = 1.19
[424]CattleMilk production, environmental dataPrediction of on-farm water and electricity consumption on pasture based Irish dairy farmsCART, RF, ANN, SVMElectricity consumption prediction: SVM: relative prediction error = 12%
[425]GoatRGB dataDetection of dairy goats from surveillance videoCNNFaster R-CNN: Acc = 92.49 %
[426]CattleAnimal feed, machinery, milk yield dataEstimation of energy use targets for buffalo farmsANN30.5% of total energy input can be saved if targeted inputs are followed
[427]Cattle3D images dataPrediction of liveweight and carcass characteristicsANN, SLRANN: Liveweight: R2 = 0.7, RMSE = 42; CCW:
R2 = 0.88, RMSE = 14; SMY: R2 = 0.72, RMSE = 14
[428]SwineRGB imagesDetection and pig counting on farms CNNMAE = 1.67, RMSE = 2.13, detection speed = 42 ms per image
[429]SheepBiometric traits, body condition score dataPrediction of commercial meat cuts and carcass traitsMLR, ANN, SVR, BNSVM: Neck weight: R2 = 0.63, RMSE = 0.09 kg; HCW: R2 = 0.84, RMSE = 0.64
[430]CattleData produced by REIMSPrediction of beef attributes (muscle tenderness, production background, breed type and quality grade)SVM, RF, KNN, LDA, PDA, XGBoost, LogitBoost, PLS-DABest Acc: SVM: 99%
[431]SheepCarcass, live weight and environmental recordsEstimation of sheep carcass traits (IMF, HCW, CTLEAN, GRFAT, LW)DL, GBT, KNN, MT, RFHighest prediction of all traits: RF: (1) IMF: R = 0.56, MAE = 0.74; (2) HCW: R = 0.88, MAE = 1.19; (3) CTLEAN: R = 0.88, MAE = 0.76
[432]SwineADG, breed, MT, gender and BBFTIdentification of pigs’ limb conditionRF, KNN, ANN, SVM, NB, GLM, Boost, LDARF: Acc = 0.8846, Kappa = 0.7693
[433]CattleActivity, weather dataPrediction of cows protein and fat content, milk yield and actual concentrate feed intake, Scenario A: only cows with similar heat tolerance; Scenario B: all cowsANN(1) Scenario A: n = 116,456; R = 0.87; slope = 0.76;
(2) Scenario B: n = 665,836; R = 0.86; slope = 0.74
[434]CattleAnimal behavior, feed intake, estrus events dataDetection of estrus in dairy heifersGLM, ANN, RFRF: Acc = 76.3–96.5%
[435]CattleInfrared thermal imagesEstimation of deep body temperatureLRM, QRMHigher correlation: QRM: R2 = 0.922
[436]CattleLiveweight, biophysical measurements dataPrediction of Carcass traits and marbling score in beef cattleLR, MLP, MT, RF, SVMSVM: carcass weight: R = 0.945, MAE = 0.139; EMA: R = 0.676, MAE = 4.793; MS: R = 0.631, MAE = 1.11
ACFW: Adult Clean Fleece Weight; ADG: Average Daily Gain; AFD: Adult Fibre Diameter; AGFW: Adult Greasy Fleece Weight; ASL: Adult Staple Length; ASS: Adult Staple Strength; BBFT: Bacon/BackFat Thickness; BCS: Body Condition Score; CCW: Cold Carcass Weights; CTLEAN: Computed Tomography Lean Meat Yield; DBT: Deep Body Temperature; EMA: Eye Muscle Area; GWAS: Genome-Wide Association Studies; GRFAT: Greville Rule Fat Depth; HER: Human Error Range; IMF: IntraMuscular Fat; HCW: Hot Carcass Weight; LW: Loin Weight; MS: Marbling Score; MT: Muscle Thickness; REIMS: Rapid Evaporative Ionization Mass Spectrometry; RGB: Red-Green-Blue; SMY: Saleable Meat Yield.
Table A10. Abbreviations for machine learning models.
AbbreviationModel
ANNArtificial Neural Network
BMBayesian Models
DLDeep Learning
DRDimensionality Reduction
DTDecision Trees
ELEnsemble Learning
IBMInstance Based Models
SVMSupport Vector Machine
Table A11. Abbreviations for machine learning algorithms.
AbbreviationModel CategoryModel
AdaBoostELAdaptive Boosting
ADTDTAlternating Decision Trees
ANFISANNAdaptive-Neuro Fuzzy Inference Systems
ARDBMAutomatic Relevance Determination
Bayesian-ANNANNBayesian Artificial Neural Network
BAGELBagging Algorithm
BDTDTBagging Decision Trees
BDLBM, ANNBayesian Deep Learning
BETELBagged Ensemble Tree
BGLMBM, RegressionBayesian Generalized Linear Model
BLRRegressionBinary Logistic Regression
BNBMBayesian Network
BPNNANNBack-Propagation Neural Networks
BRTDT, ELBoosted Regression Trees
BTCELBoosted Trees Classifiers
CARTDTClassification And Regression Trees
CCNNANNCascade Correlation Neural Networks
CDTreeDTCredal Decision Trees
CNNANNConvolutional Neural Networks
CuRegressionCubist
DBNANNDeep Belief Networks
DFEL, SVMDecision Fusion
DLSRegressionDamped Least Squares
DNNANNDeep Neural Networks
DTRDT, RegressionDecision Tree Regression
EBTDT, ELEnsemble Bagged Trees
ERTDTExtremely Randomized Trees
ELMANNExtreme Learning Machines
ENRegressionElastic Net
FCNANNFully Convolutional Networks
FISANNFuzzy Inference System
FFNNANNFeed Forward Neural Networks
GBMELGradient Boosting Model
GBTDTGradient Tree Boosting
GBRRegressionGradient Boosted Regression
GBRTDT, RegressionGradient Boosted Regression Trees
GBDTDT, ELGradient Boosted Decision Trees
GLMRegressionGeneral Linear Model
GMDHDRGroup Method of Data Handling
GNBBMGaussian Naive Bayes
GPBMGaussian Processes
GPRBMGaussian Process Regression
GRNNANNGeneralized Regression Neural Networks
GWRRegressionGeographically Weighted Regression
KMIBMK-Means
KNNIBMK-Nearest Neighbors
LASSORegressionLeast Absolute Shrinkage and Selection Operator
LDADRLinear Discriminant Analysis
LightGBMELLight Gradient Boosting Machine
LMTRegression, DTLogistic Model Trees
LGRRegressionLoGistic Regression
LMMRegressionLinear Mixed Model
LRRegressionLinear Regression
LSTMANNLong-Short Term Memory
LogitBoostELLogistic Boosting
M5TreeDTM5 model Trees
MANNANNModular Artificial Neural Networks
MARSRegressionMultivariate Adaptive Regression Splines
MCSELMultiple Classifier System
MKLDRMultiple Kernel Learning
MLPANNMulti-Layer Perceptron
MLRRegressionMultiple Linear Regression
MTDTModel Trees
NBBMNaïve Bayes
NBTreeBM, DTNaïve Bayes Trees
NNLIBMNearest Neighbor Learner
OLSRegressionOrdinary Least Squares
PLSRRegressionPartial Least Squares Regression
PLS-DARegression, DRPartial Least Squares Discriminant Analysis
QCRegressionQuadratic Classifier
QDADRQuadratic Discriminant Analysis
QRMRegressionQuadratic Regression Model
RBFNANNRadial Basis Function Networks
REPTreeDTReduced Error Pruning Tree
RFCELRandomizable Filtered Classifier
RFREL, RegressionRandom Forest Regression
RNNANNRecurrent Neural Network
RQLRegressionRegression Quantile LASSO
RFELRandom Forest
Ross-IESELRoss Iterative Ensemble Smoother
RotForELRotation Forest
RVMRRegressionRelevance Vector Machine Regression
SCFISANNSubtractive Clustering Fuzzy Inference System
STDADRStepwise Discriminant Analysis
SMOSVMSequential Minimal Optimization
SMLRRegressionStepwise Multiple Linear Regression
SOMDRSelf-Organising Maps
StoGBELStochastic Gradient Boosting
SVRSVMSupport Vector Regression
TS-FNNANNTakagi-Sugeno Fuzzy Neural Networks
XGBoostELExtreme Gradient Boosting
WANNANNWavelet Artificial Neural Networks
WELELWeighted Ensemble Learning
WNNIBMWeighted Nearest Neighbors
WSLELWeakly Supervised Learning
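For readers wishing to experiment with the algorithm families listed above, the sketch below instantiates a few representatives (RF, StoGB, LASSO, LR, KNN, MLP, SVR) on a synthetic regression task using scikit-learn. This is an illustration only: the library, hyperparameters, and toy dataset are our assumptions, not implementations taken from the reviewed studies.

```python
# Illustrative sketch (assumption: scikit-learn implementations): a few of
# the algorithm families from the abbreviations table, fitted on a toy
# regression problem. Hyperparameters are arbitrary defaults, not values
# recommended by the reviewed literature.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Synthetic data standing in for, e.g., yield-prediction features.
X, y = make_regression(n_samples=200, n_features=8, noise=0.2, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),        # EL
    "StoGB": GradientBoostingRegressor(subsample=0.8, random_state=0),    # EL
    "LASSO": Lasso(alpha=0.1),                                            # Regression
    "LR": LinearRegression(),                                             # Regression
    "KNN": KNeighborsRegressor(n_neighbors=5),                            # IBM
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0),                                  # ANN
    "SVR": SVR(kernel="rbf"),                                             # SVM
}

# Fit each model and report its in-sample coefficient of determination (R^2).
scores = {name: model.fit(X, y).score(X, y) for name, model in models.items()}
for name, r2 in scores.items():
    print(f"{name}: R^2 = {r2:.3f}")
```

In a real study, the `.score` call would of course be applied to held-out data (e.g., via cross-validation) rather than the training set.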

References

  1. Thayer, A.; Vargas, A.; Castellanos, A.; Lafon, C.; McCarl, B.; Roelke, D.; Winemiller, K.; Lacher, T. Integrating Agriculture and Ecosystems to Find Suitable Adaptations to Climate Change. Climate 2020, 8, 10. [Google Scholar] [CrossRef] [Green Version]
  2. Nassani, A.A.; Awan, U.; Zaman, K.; Hyder, S.; Aldakhil, A.M.; Abro, M.M.Q. Management of natural resources and material pricing: Global evidence. Resour. Policy 2019, 64, 101500. [Google Scholar] [CrossRef]
  3. Conrad, Z.; Niles, M.T.; Neher, D.A.; Roy, E.D.; Tichenor, N.E.; Jahns, L. Relationship between food waste, diet quality, and environmental sustainability. PLoS ONE 2018, 13. [Google Scholar] [CrossRef]
  4. Benos, L.; Bechar, A.; Bochtis, D. Safety and ergonomics in human-robot interactive agricultural operations. Biosyst. Eng. 2020, 200, 55–72. [Google Scholar] [CrossRef]
  5. Lampridi, M.; Sørensen, C.; Bochtis, D. Agricultural Sustainability: A Review of Concepts and Methods. Sustainability 2019, 11, 5120. [Google Scholar] [CrossRef] [Green Version]
  6. Zecca, F. The Use of Internet of Things for the Sustainability of the Agricultural Sector: The Case of Climate Smart Agriculture. Int. J. Civ. Eng. Technol. 2019, 10, 494–501. [Google Scholar]
  7. Sørensen, C.A.G.; Kateris, D.; Bochtis, D. ICT Innovations and Smart Farming. In Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 953, pp. 1–19. [Google Scholar]
  8. Sonka, S. Big Data: Fueling the Next Evolution of Agricultural Innovation. J. Innov. Manag. 2016, 4, 114–136. [Google Scholar] [CrossRef]
  9. Meng, T.; Jing, X.; Yan, Z.; Pedrycz, W. A survey on machine learning for data fusion. Inf. Fusion 2020, 57, 115–129. [Google Scholar] [CrossRef]
  10. Evstatiev, B.I.; Gabrovska-Evstatieva, K.G. A review on the methods for big data analysis in agriculture. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Borovets, Bulgaria, 26–29 November 2020; IOP Publishing Ltd.: Bristol, UK, 2020; Volume 1032, p. 012053. [Google Scholar]
  11. Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnuta, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions. Curr. Rev. Musculoskelet. Med. 2020, 13, 69–76. [Google Scholar] [CrossRef]
  12. Liakos, K.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine Learning in Agriculture: A Review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [Green Version]
  13. Abade, A.; Ferreira, P.; Vidal, F. Plant Diseases recognition on images using Convolutional Neural Networks: A Systematic Review. arXiv 2020, arXiv:2009.04365. [Google Scholar]
  14. Yashodha, G.; Shalini, D. An integrated approach for predicting and broadcasting tea leaf disease at early stage using IoT with machine learning—A review. Mater. Today Proc. 2020. [Google Scholar] [CrossRef]
  15. Yuan, Y.; Chen, L.; Wu, H.; Li, L. Advanced agricultural disease image recognition technologies: A review. Inf. Process. Agric. 2021. [Google Scholar] [CrossRef]
  16. Mayuri, K. Role of Image Processing and Machine Learning Techniques in Disease Recognition, Diagnosis and Yield Prediction of Crops: A Review. Int. J. Adv. Res. Comput. Sci. 2018, 9, 788–795. [Google Scholar]
  17. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  18. Su, W.-H. Advanced Machine Learning in Point Spectroscopy, RGB- and Hyperspectral-Imaging for Automatic Discriminations of Crops and Weeds: A Review. Smart Cities 2020, 3, 767–792. [Google Scholar] [CrossRef]
  19. Chlingaryan, A.; Sukkarieh, S.; Whelan, B. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review. Comput. Electron. Agric. 2018, 151, 61–69. [Google Scholar] [CrossRef]
  20. Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
  21. Pushpanathan, K.; Hanafi, M.; Mashohor, S.; Fazlil Ilahi, W.F. Machine learning in medicinal plants recognition: A review. Artif. Intell. Rev. 2021, 54, 305–327. [Google Scholar] [CrossRef]
  22. Wäldchen, J.; Rzanny, M.; Seeland, M.; Mäder, P. Automated plant species identification—Trends and future directions. PLoS Comput. Biol. 2018, 14, e1005993. [Google Scholar] [CrossRef] [Green Version]
  23. Virnodkar, S.S.; Pachghare, V.K.; Patil, V.C.; Jha, S.K. Remote sensing and machine learning for crop water stress determination in various crops: A critical review. Precis. Agric. 2020, 21, 1121–1155. [Google Scholar] [CrossRef]
  24. Sun, A.Y.; Scanlon, B.R. How can Big Data and machine learning benefit environment and water management: A survey of methods, applications, and future directions. Environ. Res. Lett. 2019, 14, 73001. [Google Scholar] [CrossRef]
  25. Li, N.; Ren, Z.; Li, D.; Zeng, L. Review: Automated techniques for monitoring the behaviour and welfare of broilers and laying hens: Towards the goal of precision livestock farming. Animal 2020, 14, 617–625. [Google Scholar] [CrossRef] [Green Version]
  26. García, R.; Aguilar, J.; Toro, M.; Pinto, A.; Rodríguez, P. A systematic literature review on the use of machine learning in precision livestock farming. Comput. Electron. Agric. 2020, 179, 105826. [Google Scholar] [CrossRef]
  27. Ellis, J.L.; Jacobs, M.; Dijkstra, J.; van Laar, H.; Cant, J.P.; Tulpan, D.; Ferguson, N. Review: Synergy between mechanistic modelling and data-driven models for modern animal production systems in the era of big data. Animal 2020, 14, s223–s237. [Google Scholar] [CrossRef] [Green Version]
  28. Lovarelli, D.; Bacenetti, J.; Guarino, M. A review on dairy cattle farming: Is precision livestock farming the compromise for an environmental, economic and social sustainable production? J. Clean. Prod. 2020, 262, 121409. [Google Scholar] [CrossRef]
  29. Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef] [Green Version]
  30. Cravero, A.; Sepúlveda, S. Use and Adaptations of Machine Learning in Big Data—Applications in Real Cases in Agriculture. Electronics 2021, 10, 552. [Google Scholar] [CrossRef]
  31. Ang, K.L.-M.; Seng, J.K.P. Big Data and Machine Learning with Hyperspectral Information in Agriculture. IEEE Access 2021, 9, 36699–36718. [Google Scholar] [CrossRef]
  32. Jose, A.; Nandagopalan, S.; Venkata, C.M.; Akana, S. Artificial Intelligence Techniques for Agriculture Revolution: A Survey. Ann. Rom. Soc. Cell Biol. 2021, 25, 2580–2597. [Google Scholar]
  33. Jung, J.; Maeda, M.; Chang, A.; Bhandari, M.; Ashapure, A.; Landivar-Bowles, J. The potential of remote sensing and artificial intelligence as tools to improve the resilience of agriculture production systems. Curr. Opin. Biotechnol. 2021, 70, 15–22. [Google Scholar] [CrossRef]
  34. Vieira, S.; Lopez Pinaya, W.H.; Mechelli, A. Introduction to Machine Learning. In Machine Learning; Mechelli, A., Vieira, S., Eds.; Academic Press: Cambridge, MA, USA, 2020; Chapter 1; pp. 1–20; ISBN 978-0-12-815739-8. [Google Scholar]
  35. Domingos, P. A few useful things to know about machine learning. Commun. ACM 2012, 55, 78–87. [Google Scholar] [CrossRef] [Green Version]
  36. Lopez-Arevalo, I.; Aldana-Bobadilla, E.; Molina-Villegas, A.; Galeana-Zapién, H.; Muñiz-Sanchez, V.; Gausin-Valle, S. A Memory-Efficient Encoding Method for Processing Mixed-Type Data on Machine Learning. Entropy 2020, 22, 1391. [Google Scholar] [CrossRef]
  37. Anagnostis, A.; Papageorgiou, E.; Bochtis, D. Application of Artificial Neural Networks for Natural Gas Consumption Forecasting. Sustainability 2020, 12, 6409. [Google Scholar] [CrossRef]
  38. Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018. [Google Scholar]
  39. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  40. Kokkotis, C.; Moustakidis, S.; Papageorgiou, E.; Giakas, G.; Tsaopoulos, D.E. Machine learning in knee osteoarthritis: A review. Osteoarthr. Cartil. Open 2020, 2, 100069. [Google Scholar] [CrossRef]
  41. Simeone, O. A Very Brief Introduction to Machine Learning With Applications to Communication Systems. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 648–664. [Google Scholar] [CrossRef] [Green Version]
  42. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Peter Campbell, J. Introduction to machine learning, neural networks, and deep learning. Transl. Vis. Sci. Technol. 2020, 9, 14. [Google Scholar] [CrossRef] [PubMed]
  43. Cheng, Q.; Zhang, S.; Bo, S.; Chen, D.; Zhang, H. Augmented Reality Dynamic Image Recognition Technology Based on Deep Learning Algorithm. IEEE Access 2020, 8, 137370–137384. [Google Scholar] [CrossRef]
  44. Anvarjon, T.; Mustaqeem; Kwon, S. Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features. Sensors 2020, 20, 5212. [Google Scholar] [CrossRef]
  45. Fujiyoshi, H.; Hirakawa, T.; Yamashita, T. Deep learning-based image recognition for autonomous driving. IATSS Res. 2019, 43, 244–252. [Google Scholar] [CrossRef]
  46. Rai, A.K.; Dwivedi, R.K. Fraud Detection in Credit Card Data using Unsupervised Machine Learning Based Scheme. In Proceedings of the International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2–4 July 2020; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020; pp. 421–426. [Google Scholar]
  47. Carta, S.; Ferreira, A.; Podda, A.S.; Reforgiato Recupero, D.; Sanna, A. Multi-DQN: An ensemble of Deep Q-learning agents for stock market forecasting. Expert Syst. Appl. 2021, 164, 113820. [Google Scholar] [CrossRef]
  48. Sofos, F.; Karakasidis, T.E. Machine Learning Techniques for Fluid Flows at the Nanoscale. Fluids 2021, 6, 96. [Google Scholar] [CrossRef]
  49. Gangavarapu, T.; Jaidhar, C.D.; Chanduka, B. Applicability of machine learning in spam and phishing email filtering: Review and approaches. Artif. Intell. Rev. 2020, 53, 5019–5081. [Google Scholar] [CrossRef]
  50. Lučin, I.; Grbčić, L.; Čarija, Z.; Kranjčević, L. Machine-Learning Classification of a Number of Contaminant Sources in an Urban Water Network. Sensors 2021, 21, 245. [Google Scholar] [CrossRef]
  51. Anagnostis, A.; Benos, L.; Tsaopoulos, D.; Tagarakis, A.; Tsolakis, N.; Bochtis, D. Human activity recognition through recurrent neural networks for human-robot interaction in agriculture. Appl. Sci. 2021, 11, 2188. [Google Scholar] [CrossRef]
  52. Yvoz, S.; Petit, S.; Biju-Duval, L.; Cordeau, S. A framework to type crop management strategies within a production situation to improve the comprehension of weed communities. Eur. J. Agron. 2020, 115, 126009. [Google Scholar] [CrossRef]
  53. Khaki, S.; Wang, L. Crop Yield Prediction Using Deep Neural Networks. Front. Plant Sci. 2019, 10, 621. [Google Scholar] [CrossRef] [Green Version]
  54. Harvey, C.A.; Rakotobe, Z.L.; Rao, N.S.; Dave, R.; Razafimahatratra, H.; Rabarijohn, R.H.; Rajaofara, H.; MacKinnon, J.L. Extreme vulnerability of smallholder farmers to agricultural risks and climate change in Madagascar. Philos. Trans. R. Soc. B Biol. Sci. 2014, 369. [Google Scholar] [CrossRef] [Green Version]
  55. Isleib, J. Signs and Symptoms of Plant Disease: Is It Fungal, Viral or Bacterial? Available online: https://www.canr.msu.edu/news/signs_and_symptoms_of_plant_disease_is_it_fungal_viral_or_bacterial (accessed on 19 March 2021).
  56. Zhang, J.; Rao, Y.; Man, C.; Jiang, Z.; Li, S. Identification of cucumber leaf diseases using deep learning and small sample size for agricultural Internet of Things. Int. J. Distrib. Sens. Netw. 2021, 17, 1–13. [Google Scholar] [CrossRef]
  57. Anagnostis, A.; Tagarakis, A.C.; Asiminari, G.; Papageorgiou, E.; Kateris, D.; Moshou, D.; Bochtis, D. A deep learning approach for anthracnose infected trees classification in walnut orchards. Comput. Electron. Agric. 2021, 182, 105998. [Google Scholar] [CrossRef]
  58. Islam, N.; Rashid, M.M.; Wibowo, S.; Xu, C.-Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early Weed Detection Using Image Processing and Machine Learning Techniques in an Australian Chilli Farm. Agriculture 2021, 11, 387. [Google Scholar] [CrossRef]
  59. Slaughter, D.C.; Giles, D.K.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78. [Google Scholar] [CrossRef]
  60. Zhang, L.; Li, R.; Li, Z.; Meng, Y.; Liang, J.; Fu, L.; Jin, X.; Li, S. A Quadratic Traversal Algorithm of Shortest Weeding Path Planning for Agricultural Mobile Robots in Cornfield. J. Robot. 2021, 2021, 6633139. [Google Scholar] [CrossRef]
  61. Bonnet, P.; Joly, A.; Goëau, H.; Champ, J.; Vignau, C.; Molino, J.-F.; Barthélémy, D.; Boujemaa, N. Plant identification: Man vs. machine. Multimed. Tools Appl. 2016, 75, 1647–1665. [Google Scholar] [CrossRef] [Green Version]
  62. Seeland, M.; Rzanny, M.; Alaqraa, N.; Wäldchen, J.; Mäder, P. Plant species classification using flower images—A comparative study of local feature representations. PLoS ONE 2017, 12, e0170629. [Google Scholar] [CrossRef]
  63. Zhang, S.; Huang, W.; Huang, Y.; Zhang, C. Plant species recognition methods using leaf image: Overview. Neurocomputing 2020, 408, 246–272. [Google Scholar] [CrossRef]
  64. Papageorgiou, E.I.; Aggelopoulou, K.; Gemtos, T.A.; Nanos, G.D. Development and Evaluation of a Fuzzy Inference System and a Neuro-Fuzzy Inference System for Grading Apple Quality. Appl. Artif. Intell. 2018, 32, 253–280. [Google Scholar] [CrossRef]
  65. Neupane, J.; Guo, W. Agronomic Basis and Strategies for Precision Water Management: A Review. Agronomy 2019, 9, 87. [Google Scholar] [CrossRef] [Green Version]
  66. El Bilali, A.; Taleb, A.; Brouziyne, Y. Groundwater quality forecasting using machine learning algorithms for irrigation purposes. Agric. Water Manag. 2021, 245, 106625. [Google Scholar] [CrossRef]
  67. Lu, Y.-C.; Sadler, E.J.; Camp, C.R. Economic Feasibility Study of Variable Irrigation of Corn Production in Southeast Coastal Plain. J. Sustain. Agric. 2005, 26, 69–81. [Google Scholar] [CrossRef]
  68. Mauget, S.A.; Adhikari, P.; Leiker, G.; Baumhardt, R.L.; Thorp, K.R.; Ale, S. Modeling the effects of management and elevation on West Texas dryland cotton production. Agric. For. Meteorol. 2017, 247, 385–398. [Google Scholar] [CrossRef]
  69. Chasek, P.; Safriel, U.; Shikongo, S.; Fuhrman, V.F. Operationalizing Zero Net Land Degradation: The next stage in international efforts to combat desertification? J. Arid Environ. 2015, 112, 5–13. [Google Scholar] [CrossRef]
  70. Fournel, S.; Rousseau, A.N.; Laberge, B. Rethinking environment control strategy of confined animal housing systems through precision livestock farming. Biosyst. Eng. 2017, 155, 96–123. [Google Scholar] [CrossRef]
  71. Salina, A.B.; Hassan, L.; Saharee, A.A.; Jajere, S.M.; Stevenson, M.A.; Ghazali, K. Assessment of knowledge, attitude, and practice on livestock traceability among cattle farmers and cattle traders in peninsular Malaysia and its impact on disease control. Trop. Anim. Health Prod. 2020, 53, 15. [Google Scholar] [CrossRef]
  72. Akhigbe, B.I.; Munir, K.; Akinade, O.; Akanbi, L.; Oyedele, L.O. IoT Technologies for Livestock Management: A Review of Present Status, Opportunities, and Future Trends. Big Data Cogn. Comput. 2021, 5, 10. [Google Scholar] [CrossRef]
  73. Wathes, C.M.; Kristensen, H.H.; Aerts, J.-M.; Berckmans, D. Is precision livestock farming an engineer’s daydream or nightmare, an animal’s friend or foe, and a farmer’s panacea or pitfall? Comput. Electron. Agric. 2008, 64, 2–10. [Google Scholar] [CrossRef]
  74. Berckmans, D.; Guarino, M. From the Editors: Precision livestock farming for the global livestock sector. Anim. Front. 2017, 7, 4–5. [Google Scholar] [CrossRef] [Green Version]
  75. PRISMA. Available online: http://prisma-statement.org/prismastatement/flowdiagram.aspx (accessed on 1 February 2021).
  76. Labarrière, F.; Thomas, E.; Calistri, L.; Optasanu, V.; Gueugnon, M.; Ornetti, P.; Laroche, D. Machine Learning Approaches for Activity Recognition and/or Activity Prediction in Locomotion Assistive Devices—A Systematic Review. Sensors 2020, 20, 6345. [Google Scholar] [CrossRef]
  77. Benos, L.; Stanev, D.; Spyrou, L.; Moustakas, K.; Tsaopoulos, D.E. A Review on Finite Element Modeling and Simulation of the Anterior Cruciate Ligament Reconstruction. Front. Bioeng. Biotechnol. 2020, 8. [Google Scholar] [CrossRef]
  78. Mostafa, S.S.; Mendonça, F.; Ravelo-García, A.G.; Morgado-Dias, F. A Systematic Review of Detecting Sleep Apnea Using Deep Learning. Sensors 2019, 19, 4934. [Google Scholar] [CrossRef] [Green Version]
  79. Anagnostis, A.; Asiminari, G.; Papageorgiou, E.; Bochtis, D. A Convolutional Neural Networks Based Method for Anthracnose Infected Walnut Tree Leaves Identification. Appl. Sci. 2020, 10, 469. [Google Scholar] [CrossRef] [Green Version]
  80. De Myttenaere, A.; Golden, B.; Le Grand, B.; Rossi, F. Mean Absolute Percentage Error for regression models. Neurocomputing 2016, 192, 38–48. [Google Scholar] [CrossRef] [Green Version]
  81. Lehmann, E.L.; Casella, G. Theory of Point Estimation, 2nd ed.; Springer: New York, NY, USA, 1998; ISBN 978-0-387-98502-2. [Google Scholar]
  82. Hosseini, S.; Ivanov, D.; Dolgui, A. Review of quantitative methods for supply chain resilience analysis. Transp. Res. Part E Logist. Transp. Rev. 2019, 125, 285–307. [Google Scholar] [CrossRef]
  83. Benos, L.; Tsaopoulos, D.; Bochtis, D. A review on ergonomics in agriculture. part I: Manual operations. Appl. Sci. 2020, 10, 1905. [Google Scholar] [CrossRef] [Green Version]
  84. Benos, L.; Tsaopoulos, D.; Bochtis, D. A Review on Ergonomics in Agriculture. Part II: Mechanized Operations. Appl. Sci. 2020, 10, 3484. [Google Scholar] [CrossRef]
  85. Chen, Y.-Y.; Lin, Y.-H.; Kung, C.-C.; Chung, M.-H.; Yen, I.-H. Design and Implementation of Cloud Analytics-Assisted Smart Power Meters Considering Advanced Artificial Intelligence as Edge Analytics in Demand-Side Management for Smart Homes. Sensors 2019, 19, 2047. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Sadiq, R.; Rodriguez, M.J.; Mian, H.R. Empirical Models to Predict Disinfection By-Products (DBPs) in Drinking Water: An Updated Review. In Encyclopedia of Environmental Health, 2nd ed.; Nriagu, J., Ed.; Elsevier: Oxford, UK, 2019; pp. 324–338; ISBN 978-0-444-63952-3. [Google Scholar]
  87. De Oliveira, M.A.; Monteiro, A.V.; Vieira Filho, J. A New Structural Health Monitoring Strategy Based on PZT Sensors and Convolutional Neural Network. Sensors 2018, 18, 2955. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. Yang, B.; Ma, J.; Yao, X.; Cao, W.; Zhu, Y. Estimation of Leaf Nitrogen Content in Wheat Based on Fusion of Spectral Features and Deep Features from Near Infrared Hyperspectral Imagery. Sensors 2021, 21, 613. [Google Scholar] [CrossRef] [PubMed]
  89. Sagi, O.; Rokach, L. Ensemble learning: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1249. [Google Scholar] [CrossRef]
  90. Pisner, D.A.; Schnyer, D.M. Support Vector Machine. In Machine Learning; Mechelli, A., Vieira, S., Eds.; Academic Press: Cambridge, MA, USA, 2020; Chapter 6; pp. 101–121; ISBN 978-0-12-815739-8. [Google Scholar]
  91. Verhaeghe, H.; Nijssen, S.; Pesant, G.; Quimper, C.-G.; Schaus, P. Learning optimal decision trees using constraint programming. Constraints 2020, 25, 226–250. [Google Scholar] [CrossRef]
  92. Khosravi, P.; Vergari, A.; Choi, Y.; Liang, Y.; Van den Broeck, G. Handling Missing Data in Decision Trees: A Probabilistic Approach. arXiv 2020, arXiv:2006.16341. [Google Scholar]
  93. FAO. Bread Wheat—Improvement and Production. Available online: http://www.fao.org/3/y4011e/y4011e00.htm (accessed on 24 February 2021).
  94. UN. Food and Agriculture Organization Corporate Statistical Database (FAOSTAT) Crops/Regions/World List/Production Quantity (Pick Lists), Rice (Paddy). Available online: http://www.fao.org/faostat/en/#data/QC (accessed on 24 February 2021).
  95. Badole, S.L.; Patil, K.Y.; Rangari, V.D. Antihyperglycemic Activity of Bioactive Compounds from Soybeans. In Glucose Intake and Utilization in Pre-Diabetes and Diabetes; Watson, R.R., Dokken, B.B., Eds.; Academic Press: Boston, MA, USA, 2015; Chapter 18; pp. 225–227; ISBN 978-0-12-800093-9. [Google Scholar]
  96. Moshou, D.; Bravo, C.; Oberti, R.; West, J.S.; Ramon, H.; Vougioukas, S.; Bochtis, D. Intelligent multi-sensor system for the detection and treatment of fungal diseases in arable crops. Biosyst. Eng. 2011, 108, 311–321. [Google Scholar] [CrossRef]
  97. Abdullah, S.S.; Malek, M.A.; Abdullah, N.S.; Kisi, O.; Yap, K.S. Extreme Learning Machines: A new approach for prediction of reference evapotranspiration. J. Hydrol. 2015, 527, 184–195. [Google Scholar] [CrossRef]
  98. Voroney, P. Soils for Horse Pasture Management. In Horse Pasture Management; Sharpe, P., Ed.; Academic Press: Cambridge, MA, USA, 2019; Chapter 4; pp. 65–79; ISBN 978-0-12-812919-7. [Google Scholar]
  99. Gonzalez-Rivas, P.A.; Chauhan, S.S.; Ha, M.; Fegan, N.; Dunshea, F.R.; Warner, R.D. Effects of heat stress on animal physiology, metabolism, and meat quality: A review. Meat Sci. 2020, 162, 108025. [Google Scholar] [CrossRef]
  100. Pomar, C.; Marcoux, M.; Gispert, M.; Font i Furnols, M.; Daumas, G. Determining the lean content of pork carcasses. In Improving the Sensory and Nutritional Quality of Fresh Meat; Kerry, J.P., Ledward, D., Eds.; Woodhead Publishing: Cambridge, UK, 2009; pp. 493–518; ISBN 978-1-84569-343-5. [Google Scholar]
  101. Chandra, M.A.; Bedi, S.S. Survey on SVM and their application in image classification. Int. J. Inf. Technol. 2018. [Google Scholar] [CrossRef]
  102. Alvarez-Vanhard, E.; Corpetti, T.; Houet, T. UAV & satellite synergies for optical remote sensing applications: A literature review. Sci. Remote Sens. 2021, 3, 100019. [Google Scholar] [CrossRef]
  103. Kouadio, L.; Deo, R.C.; Byrareddy, V.; Adamowski, J.F.; Mushtaq, S.; Phuong Nguyen, V. Artificial intelligence approach for the prediction of Robusta coffee yield using soil fertility properties. Comput. Electron. Agric. 2018, 155, 324–338. [Google Scholar] [CrossRef]
  104. Aghighi, H.; Azadbakht, M.; Ashourloo, D.; Shahrabi, H.S.; Radiom, S. Machine Learning Regression Techniques for the Silage Maize Yield Prediction Using Time-Series Images of Landsat 8 OLI. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4563–4577. [Google Scholar] [CrossRef]
  105. Khanal, S.; Fulton, J.; Klopfenstein, A.; Douridas, N.; Shearer, S. Integration of high resolution remotely sensed data and machine learning techniques for spatial prediction of soil properties and corn yield. Comput. Electron. Agric. 2018, 153, 213–225. [Google Scholar] [CrossRef]
  106. Haghverdi, A.; Washington-Allen, R.A.; Leib, B.G. Prediction of cotton lint yield from phenology of crop indices using artificial neural networks. Comput. Electron. Agric. 2018, 152, 186–197. [Google Scholar] [CrossRef]
  107. Linker, R. Machine learning based analysis of night-time images for yield prediction in apple orchard. Biosyst. Eng. 2018, 167, 114–125. [Google Scholar] [CrossRef]
  108. Ahmad, I.; Saeed, U.; Fahad, M.; Ullah, A.; Habib ur Rahman, M.; Ahmad, A.; Judge, J. Yield Forecasting of Spring Maize Using Remote Sensing and Crop Modeling in Faisalabad-Punjab Pakistan. J. Indian Soc. Remote Sens. 2018, 46, 1701–1711. [Google Scholar] [CrossRef]
  109. Sayago, S.; Bocco, M. Crop yield estimation using satellite images: Comparison of linear and non-linear models. AgriScientia 2018, 35, 1–9. [Google Scholar] [CrossRef] [Green Version]
  110. Akbar, A.; Kuanar, A.; Patnaik, J.; Mishra, A.; Nayak, S. Application of Artificial Neural Network modeling for optimization and prediction of essential oil yield in turmeric (Curcuma longa L.). Comput. Electron. Agric. 2018, 148, 160–178. [Google Scholar] [CrossRef]
  111. Zeng, W.; Xu, C.; Gang, Z.H.A.O.; Wu, J.; Huang, J. Estimation of Sunflower Seed Yield Using Partial Least Squares Regression and Artificial Neural Network Models. Pedosphere 2018, 28, 764–774. [Google Scholar] [CrossRef]
  112. Pourmohammadali, B.; Hosseinifard, S.J.; Hassan Salehi, M.; Shirani, H.; Esfandiarpour Boroujeni, I. Effects of soil properties, water quality and management practices on pistachio yield in Rafsanjan region, southeast of Iran. Agric. Water Manag. 2019, 213, 894–902. [Google Scholar] [CrossRef]
  113. Maya Gopal, P.S.; Bhargavi, R. Performance Evaluation of Best Feature Subsets for Crop Yield Prediction Using Machine Learning Algorithms. Appl. Artif. Intell. 2019, 33, 621–642. [Google Scholar] [CrossRef]
  114. Gómez, D.; Salvador, P.; Sanz, J.; Casanova, J.L. Potato Yield Prediction Using Machine Learning Techniques and Sentinel 2 Data. Remote Sens. 2019, 11, 1745. [Google Scholar] [CrossRef] [Green Version]
  115. Cai, Y.; Guan, K.; Lobell, D.; Potgieter, A.B.; Wang, S.; Peng, J.; Xu, T.; Asseng, S.; Zhang, Y.; You, L.; et al. Integrating satellite and climate data to predict wheat yield in Australia using machine learning approaches. Agric. For. Meteorol. 2019, 274, 144–159. [Google Scholar] [CrossRef]
  116. Kim, N.; Ha, K.-J.; Park, N.-W.; Cho, J.; Hong, S.; Lee, Y.-W. A Comparison Between Major Artificial Intelligence Models for Crop Yield Prediction: Case Study of the Midwestern United States, 2006–2015. ISPRS Int. J. Geo-Inf. 2019, 8, 240. [Google Scholar] [CrossRef] [Green Version]
  117. Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859. [Google Scholar] [CrossRef]
  118. Chen, Y.; Lee, W.S.; Gan, H.; Peres, N.; Fraisse, C.; Zhang, Y.; He, Y. Strawberry Yield Prediction Based on a Deep Neural Network Using High-Resolution Aerial Orthoimages. Remote Sens. 2019, 11, 1584. [Google Scholar] [CrossRef] [Green Version]
  119. Maya Gopal, P.S.; Bhargavi, R. A novel approach for efficient crop yield prediction. Comput. Electron. Agric. 2019, 165, 104968. [Google Scholar] [CrossRef]
  120. Sun, J.; Di, L.; Sun, Z.; Shen, Y.; Lai, Z. County-level soybean yield prediction using deep CNN-LSTM model. Sensors 2019, 19, 4363. [Google Scholar] [CrossRef] [Green Version]
  121. Kayad, A.; Sozzi, M.; Gatto, S.; Marinello, F.; Pirotti, F. Monitoring Within-Field Variability of Corn Yield using Sentinel-2 and Machine Learning Techniques. Remote Sens. 2019, 11, 2873. [Google Scholar] [CrossRef] [Green Version]
  122. Gutiérrez, S.; Wendel, A.; Underwood, J. Spectral filter design based on in-field hyperspectral imaging and machine learning for mango ripeness estimation. Comput. Electron. Agric. 2019, 164, 104890. [Google Scholar] [CrossRef]
  123. Filippi, P.; Jones, E.J.; Wimalathunge, N.S.; Somarathna, P.D.S.N.; Pozza, L.E.; Ugbaje, S.U.; Jephcott, T.G.; Paterson, S.E.; Whelan, B.M.; Bishop, T.F.A. An approach to forecast grain crop yield using multi-layered, multi-farm data sets and machine learning. Precis. Agric. 2019, 20, 1015–1029. [Google Scholar] [CrossRef]
  124. Shidnal, S.; Latte, M.V.; Kapoor, A. Crop yield prediction: Two-tiered machine learning model approach. Int. J. Inf. Technol. 2019, 1–9. [Google Scholar] [CrossRef]
  125. Yang, Q.; Shi, L.; Han, J.; Zha, Y.; Zhu, P. Deep convolutional neural networks for rice grain yield estimation at the ripening stage using UAV-based remotely sensed images. Field Crop. Res. 2019, 235, 142–153. [Google Scholar] [CrossRef]
  126. Leroux, L.; Castets, M.; Baron, C.; Escorihuela, M.J.; Bégué, A.; Lo Seen, D. Maize yield estimation in West Africa from crop process-induced combinations of multi-domain remote sensing indices. Eur. J. Agron. 2019, 108, 11–26. [Google Scholar] [CrossRef]
  127. Abrougui, K.; Gabsi, K.; Mercatoris, B.; Khemis, C.; Amami, R.; Chehaibi, S. Prediction of organic potato yield using tillage systems and soil properties by artificial neural network (ANN) and multiple linear regressions (MLR). Soil Tillage Res. 2019, 190, 202–208. [Google Scholar] [CrossRef]
  128. Folberth, C.; Baklanov, A.; Balkovič, J.; Skalský, R.; Khabarov, N.; Obersteiner, M. Spatio-temporal downscaling of gridded crop model yield estimates based on machine learning. Agric. For. Meteorol. 2019, 264, 1–15. [Google Scholar] [CrossRef] [Green Version]
  129. Schwalbert, R.A.; Amado, T.; Corassa, G.; Pott, L.P.; Prasad, P.V.V.; Ciampitti, I.A. Satellite-based soybean yield forecast: Integrating machine learning and weather data for improving crop yield prediction in southern Brazil. Agric. For. Meteorol. 2020, 284, 107886. [Google Scholar] [CrossRef]
  130. Abbas, F.; Afzaal, H.; Farooque, A.A.; Tang, S. Crop yield prediction through proximal sensing and machine learning algorithms. Agronomy 2020, 10, 1046. [Google Scholar] [CrossRef]
  131. Khosla, E.; Dharavath, R.; Priya, R. Crop yield prediction using aggregated rainfall-based modular artificial neural networks and support vector regression. Environ. Dev. Sustain. 2020, 22, 5687–5708. [Google Scholar] [CrossRef]
  132. Han, J.; Zhang, Z.; Cao, J.; Luo, Y.; Zhang, L.; Li, Z.; Zhang, J. Prediction of Winter Wheat Yield Based on Multi-Source Data and Machine Learning in China. Remote Sens. 2020, 12, 236. [Google Scholar] [CrossRef] [Green Version]
  133. Mupangwa, W.; Chipindu, L.; Nyagumbo, I.; Mkuhlani, S.; Sisito, G. Evaluating machine learning algorithms for predicting maize yield under conservation agriculture in Eastern and Southern Africa. SN Appl. Sci. 2020, 2, 1–14. [Google Scholar] [CrossRef]
  134. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  135. Khaki, S.; Wang, L.; Archontoulis, S.V. A CNN-RNN Framework for Crop Yield Prediction. Front. Plant Sci. 2020, 10, 1750. [Google Scholar] [CrossRef]
  136. Ballesteros, R.; Intrigliolo, D.S.; Ortega, J.F.; Ramírez-Cuesta, J.M.; Buesa, I.; Moreno, M.A. Vineyard yield estimation by combining remote sensing, computer vision and artificial neural network techniques. Precis. Agric. 2020, 21, 1242–1262. [Google Scholar] [CrossRef]
  137. Son, N.T.; Chen, C.F.; Chen, C.R.; Guo, H.Y.; Cheng, Y.S.; Chen, S.L.; Lin, H.S.; Chen, S.H. Machine learning approaches for rice crop yield predictions using time-series satellite data in Taiwan. Int. J. Remote Sens. 2020, 41, 7868–7888. [Google Scholar] [CrossRef]
  138. Barbosa, A.; Trevisan, R.; Hovakimyan, N.; Martin, N.F. Modeling yield response to crop management using convolutional neural networks. Comput. Electron. Agric. 2020, 170, 105197. [Google Scholar] [CrossRef]
  139. Murali, P.; Revathy, R.; Balamurali, S.; Tayade, A.S. Integration of RNN with GARCH refined by whale optimization algorithm for yield forecasting: A hybrid machine learning approach. J. Ambient Intell. Hum. Comput. 2020, 1–13. [Google Scholar] [CrossRef]
  140. Kamir, E.; Waldner, F.; Hochman, Z. Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods. ISPRS J. Photogramm. Remote Sens. 2020, 160, 124–135. [Google Scholar] [CrossRef]
  141. Saranya, C.P.; Nagarajan, N. Efficient agricultural yield prediction using metaheuristic optimized artificial neural network using Hadoop framework. Soft Comput. 2020, 24, 12659–12669. [Google Scholar] [CrossRef]
  142. Kim, N.; Na, S.-I.; Park, C.-W.; Huh, M.; Oh, J.; Ha, K.-J.; Cho, J.; Lee, Y.-W. An Artificial Intelligence Approach to Prediction of Corn Yields under Extreme Weather Conditions Using Satellite and Meteorological Data. Appl. Sci. 2020, 10, 3785. [Google Scholar] [CrossRef]
  143. Amaratunga, V.; Wickramasinghe, L.; Perera, A.; Jayasinghe, J.; Rathnayake, U.; Zhou, J.G. Artificial Neural Network to Estimate the Paddy Yield Prediction Using Climatic Data. Math. Probl. Eng. 2020, 2020. [Google Scholar] [CrossRef]
  144. Shahhosseini, M.; Hu, G.; Archontoulis, S.V. Forecasting Corn Yield with Machine Learning Ensembles. Front. Plant Sci. 2020, 11, 1120. [Google Scholar] [CrossRef]
  145. Mwaura, J.I.; Kenduiywo, B.K. County level maize yield estimation using artificial neural network. Model. Earth Syst. Environ. 2020, 1–8. [Google Scholar] [CrossRef]
146. Dang, C.; Liu, Y.; Yue, H.; Qian, J.X.; Zhu, R. Autumn Crop Yield Prediction using Data-Driven Approaches: Support Vector Machines, Random Forest, and Deep Neural Network Methods. Can. J. Remote Sens. 2020. [Google Scholar] [CrossRef]
  147. Fu, Z.; Jiang, J.; Gao, Y.; Krienke, B.; Wang, M.; Zhong, K.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; et al. Wheat Growth Monitoring and Yield Estimation based on Multi-Rotor Unmanned Aerial Vehicle. Remote Sens. 2020, 12, 508. [Google Scholar] [CrossRef] [Green Version]
  148. Leo, S.; Migliorati, M.D.A.; Grace, P.R. Predicting within-field cotton yields using publicly available datasets and machine learning. Agron. J. 2020, 1150–1163. [Google Scholar] [CrossRef]
  149. Prasad, N.R.; Patel, N.R.; Danodia, A. Crop yield prediction in cotton for regional level using random forest approach. Spat. Inf. Res. 2020, 1–12. [Google Scholar] [CrossRef]
  150. Wan, L.; Cen, H.; Zhu, J.; Zhang, J.; Zhu, Y.; Sun, D.; Du, X.; Zhai, L.; Weng, H.; Li, Y.; et al. Grain yield prediction of rice using multi-temporal UAV-based RGB and multispectral images and model transfer—A case study of small farmlands in the South of China. Agric. For. Meteorol. 2020, 291, 108096. [Google Scholar] [CrossRef]
  151. Eugenio, F.C.; Grohs, M.; Venancio, L.P.; Schuh, M.; Bottega, E.L.; Ruoso, R.; Schons, C.; Mallmann, C.L.; Badin, T.L.; Fernandes, P. Estimation of soybean yield from machine learning techniques and multispectral RPAS imagery. Remote Sens. Appl. Soc. Environ. 2020, 20, 100397. [Google Scholar] [CrossRef]
  152. Salvador, P.; Gómez, D.; Sanz, J.; Casanova, J.L. Estimation of Potato Yield Using Satellite Data at a Municipal Level: A Machine Learning Approach. ISPRS Int. J. Geo-Inf. 2020, 9, 343. [Google Scholar] [CrossRef]
  153. Rahman, M.M.; Robson, A. Integrating Landsat-8 and Sentinel-2 Time Series Data for Yield Prediction of Sugarcane Crops at the Block Level. Remote Sens. 2020, 12, 1313. [Google Scholar] [CrossRef] [Green Version]
  154. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Yeom, J.; Maeda, M.; Maeda, A.; Dube, N.; Landivar, J.; Hague, S.; et al. Developing a machine learning based cotton yield estimation framework using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2020, 169, 180–194. [Google Scholar] [CrossRef]
  155. Nesarani, A.; Ramar, R.; Pandian, S. An efficient approach for rice prediction from authenticated Block chain node using machine learning technique. Environ. Technol. Innov. 2020, 20, 101064. [Google Scholar] [CrossRef]
  156. Barzin, R.; Pathak, R.; Lotfi, H.; Varco, J.; Bora, G.C. Use of UAS Multispectral Imagery at Different Physiological Stages for Yield Prediction and Input Resource Optimization in Corn. Remote Sens. 2020, 12, 2392. [Google Scholar] [CrossRef]
  157. Sun, J.; Lai, Z.; Di, L.; Sun, Z.; Tao, J.; Shen, Y. Multilevel Deep Learning Network for County-Level Corn Yield Estimation in the U.S. Corn Belt. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5048–5060. [Google Scholar] [CrossRef]
  158. Jiang, Z.; Liu, C.; Ganapathysubramanian, B.; Hayes, D.J.; Sarkar, S. Predicting county-scale maize yields with publicly available data. Sci. Rep. 2020, 10, 1–12. [Google Scholar] [CrossRef]
159. Elavarasan, D.; Vincent, P.M.D.R.; Srinivasan, K.; Chang, C.-Y. A Hybrid CFS Filter and RF-RFE Wrapper-Based Feature Extraction for Enhanced Agricultural Crop Yield Prediction Modeling. Agriculture 2020, 10, 400. [Google Scholar] [CrossRef]
  160. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J.; Luck, B. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sens. 2020, 12, 2028. [Google Scholar] [CrossRef]
  161. Guo, Y.; Wang, H.; Wu, Z.; Wang, S.; Sun, H.; Senthilnath, J.; Wang, J.; Robin Bryant, C.; Fu, Y. Modified Red Blue Vegetation Index for Chlorophyll Estimation and Yield Prediction of Maize from Visible Images Captured by UAV. Sensors 2020, 20, 55. [Google Scholar] [CrossRef]
  162. Khan, M.S.; Semwal, M.; Sharma, A.; Verma, R.K. An artificial neural network model for estimating Mentha crop biomass yield using Landsat 8 OLI. Precis. Agric. 2020, 21, 18–33. [Google Scholar] [CrossRef]
  163. Zhou, X.; Kono, Y.; Win, A.; Matsui, T.; Tanaka, T.S.T. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning approaches. Plant Prod. Sci. 2020, 1–15. [Google Scholar] [CrossRef]
  164. Marques Ramos, A.P.; Prado Osco, L.; Elis Garcia Furuya, D.; Nunes Gonçalves, W.; Cordeiro Santana, D.; Pereira Ribeiro Teodoro, L.; Antonio da Silva Junior, C.; Fernando Capristo-Silva, G.; Li, J.; Henrique Rojo Baio, F.; et al. A random forest ranking approach to predict yield in maize with uav-based vegetation spectral indices. Comput. Electron. Agric. 2020, 178, 105791. [Google Scholar] [CrossRef]
  165. Sun, C.; Feng, L.; Zhang, Z.; Ma, Y.; Crosby, T.; Naber, M.; Wang, Y. Prediction of End-Of-Season Tuber Yield and Tuber Set in Potatoes Using In-Season UAV-Based Hyperspectral Imagery and Machine Learning. Sensors 2020, 20, 5293. [Google Scholar] [CrossRef]
  166. Wei, M.C.F.; Maldaner, L.F.; Ottoni, P.M.N.; Molin, J.P. Carrot Yield Mapping: A Precision Agriculture Approach Based on Machine Learning. AI 2020, 1, 229–241. [Google Scholar] [CrossRef]
  167. Da Silva, E.E.; Rojo Baio, F.H.; Ribeiro Teodoro, L.P.; da Silva Junior, C.A.; Borges, R.S.; Teodoro, P.E. UAV-multispectral and vegetation indices in soybean grain yield prediction based on in situ observation. Remote Sens. Appl. Soc. Environ. 2020, 18, 100318. [Google Scholar] [CrossRef]
  168. Wang, X.; Huang, J.; Feng, Q.; Yin, D. Winter Wheat Yield Prediction at County Level and Uncertainty Analysis in Main Wheat-Producing Regions of China with Deep Learning Approaches. Remote Sens. 2020, 12, 1744. [Google Scholar] [CrossRef]
  169. Li, B.; Xu, X.; Zhang, L.; Han, J.; Bian, C.; Li, G.; Liu, J.; Jin, L. Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2020, 162, 161–172. [Google Scholar] [CrossRef]
  170. Wang, Y.; Zhang, Z.; Feng, L.; Du, Q.; Runge, T. Combining Multi-Source Data and Machine Learning Approaches to Predict Winter Wheat Yield in the Conterminous United States. Remote Sens. 2020, 12, 1232. [Google Scholar] [CrossRef] [Green Version]
  171. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018. [Google Scholar] [CrossRef]
  172. Pineda, M.; Pérez-Bueno, M.L.; Barón, M. Detection of Bacterial Infection in Melon Plants by Classification Methods Based on Imaging Data. Front. Plant Sci. 2018, 9, 164. [Google Scholar] [CrossRef] [Green Version]
  173. Fuentes, A.F.; Yoon, S.; Lee, J.; Park, D.S. High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System with Refinement Filter Bank. Front. Plant Sci. 2018, 9, 1162. [Google Scholar] [CrossRef] [Green Version]
  174. Abdulridha, J.; Ampatzidis, Y.; Ehsani, R.; de Castro, A.I. Evaluating the performance of spectral features and multivariate analysis tools to detect laurel wilt disease and nutritional deficiency in avocado. Comput. Electron. Agric. 2018, 155, 203–211. [Google Scholar] [CrossRef]
  175. Barbedo, J.G.A. Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng. 2018, 172, 84–91. [Google Scholar] [CrossRef]
  176. Tamouridou, A.; Pantazi, X.; Alexandridis, T.; Lagopodi, A.; Kontouris, G.; Moshou, D. Spectral Identification of Disease in Weeds Using Multilayer Perceptron with Automatic Relevance Determination. Sensors 2018, 18, 2770. [Google Scholar] [CrossRef] [Green Version]
  177. Lu, J.; Ehsani, R.; Shi, Y.; de Castro, A.I.; Wang, S. Detection of multi-tomato leaf diseases (late blight, target and bacterial spots) in different stages by using a spectral-based sensor. Sci. Rep. 2018, 8, 1–11. [Google Scholar] [CrossRef] [Green Version]
  178. Zhang, X.; Qiao, Y.; Meng, F.; Fan, C.; Zhang, M. Identification of maize leaf diseases using improved deep convolutional neural networks. IEEE Access 2018, 6, 30370–30377. [Google Scholar] [CrossRef]
  179. Chouhan, S.S.; Kaul, A.; Singh, U.P.; Jain, S. Bacterial foraging optimization based radial basis function neural network (BRBFNN) for identification and classification of plant leaf diseases: An automatic approach towards plant pathology. IEEE Access 2018, 6, 8852–8863. [Google Scholar] [CrossRef]
  180. Sharif, M.; Khan, M.A.; Iqbal, Z.; Azam, M.F.; Lali, M.I.U.; Javed, M.Y. Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection. Comput. Electron. Agric. 2018, 150, 220–234. [Google Scholar] [CrossRef]
181. Kerkech, M.; Hafiane, A.; Canals, R. Deep learning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243. [Google Scholar] [CrossRef]
  182. Kaur, S.; Pandey, S.; Goel, S. Semi-automatic leaf disease detection and classification system for soybean culture. IET Image Process. 2018, 12, 1038–1048. [Google Scholar] [CrossRef]
  183. Coulibaly, S.; Kamsu-Foguem, B.; Kamissoko, D.; Traore, D. Deep neural networks with transfer learning in millet crop images. Comput. Ind. 2019, 108, 115–120. [Google Scholar] [CrossRef] [Green Version]
  184. Wu, H.; Wiesner-Hanks, T.; Stewart, E.L.; DeChant, C.; Kaczmar, N.; Gore, M.A.; Nelson, R.J.; Lipson, H. Autonomous Detection of Plant Disease Symptoms Directly from Aerial Imagery. Plant Phenome J. 2019, 2, 1–9. [Google Scholar] [CrossRef]
  185. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Zhang, L.; Wen, S.; Zhang, H.; Zhang, Y.; Deng, Y. Detection of Helminthosporium Leaf Blotch Disease Based on UAV Imagery. Appl. Sci. 2019, 9, 558. [Google Scholar] [CrossRef] [Green Version]
  186. Abdulridha, J.; Ehsani, R.; Abd-Elrahman, A.; Ampatzidis, Y. A remote sensing technique for detecting laurel wilt disease in avocado in presence of other biotic and abiotic stresses. Comput. Electron. Agric. 2019, 156, 549–557. [Google Scholar] [CrossRef]
  187. Dhingra, G.; Kumar, V.; Joshi, H.D. A novel computer vision based neutrosophic approach for leaf disease identification and classification. Meas. J. Int. Meas. Confed. 2019, 135, 782–794. [Google Scholar] [CrossRef] [Green Version]
  188. Arnal Barbedo, J.G. Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
  189. Hu, G.; Wu, H.; Zhang, Y.; Wan, M. A low shot learning method for tea leaf’s disease identification. Comput. Electron. Agric. 2019, 163, 104852. [Google Scholar] [CrossRef]
  190. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef] [Green Version]
  191. Cruz, A.; Ampatzidis, Y.; Pierro, R.; Materazzi, A.; Panattoni, A.; De Bellis, L.; Luvisi, A. Detection of grapevine yellows symptoms in Vitis vinifera L. with artificial intelligence. Comput. Electron. Agric. 2019, 157, 63–76. [Google Scholar] [CrossRef]
  192. Wiesner-Hanks, T.; Wu, H.; Stewart, E.; DeChant, C.; Kaczmar, N.; Lipson, H.; Gore, M.A.; Nelson, R.J. Millimeter-Level Plant Disease Detection from Aerial Photographs via Deep Learning and Crowdsourced Data. Front. Plant Sci. 2019, 10, 1550. [Google Scholar] [CrossRef] [Green Version]
  193. Ozguven, M.M.; Adem, K. Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms. Phys. A Stat. Mech. Appl. 2019, 535, 122537. [Google Scholar] [CrossRef]
  194. Geetharamani, G.; Arun Pandian, J. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338. [Google Scholar] [CrossRef]
  195. Sultan Mahmud, M.; Zaman, Q.U.; Esau, T.J.; Price, G.W.; Prithiviraj, B. Development of an artificial cloud lighting condition system using machine vision for strawberry powdery mildew disease detection. Comput. Electron. Agric. 2019, 158, 219–225. [Google Scholar] [CrossRef]
  196. Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection. Symmetry 2019, 11, 939. [Google Scholar] [CrossRef] [Green Version]
  197. Abdulridha, J.; Batuman, O.; Ampatzidis, Y. UAV-based remote sensing technique to detect citrus canker disease utilizing hyperspectral imaging and machine learning. Remote Sens. 2019, 11, 1373. [Google Scholar] [CrossRef] [Green Version]
  198. Pantazi, X.E.; Moshou, D.; Tamouridou, A.A. Automated leaf disease detection in different crop species through image features analysis and One Class Classifiers. Comput. Electron. Agric. 2019, 156, 96–104. [Google Scholar] [CrossRef]
  199. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 2019, 161, 280–290. [Google Scholar] [CrossRef]
  200. Al-Saddik, H.; Simon, J.C.; Cointault, F. Assessment of the optimal spectral bands for designing a sensor for vineyard disease detection: The case of ‘Flavescence dorée’. Precis. Agric. 2019, 20, 398–422. [Google Scholar] [CrossRef]
  201. Habib, M.T.; Majumder, A.; Jakaria, A.Z.M.; Akter, M.; Uddin, M.S.; Ahmed, F. Machine vision based papaya disease recognition. J. King Saud Univ. Comput. Inf. Sci. 2020, 32, 300–309. [Google Scholar] [CrossRef]
  202. Ramesh, S.; Vydeki, D. Recognition and classification of paddy leaf diseases using Optimized Deep Neural network with Jaya algorithm. Inf. Process. Agric. 2020, 7, 249–260. [Google Scholar] [CrossRef]
  203. Abdulridha, J.; Ampatzidis, Y.; Kakarla, S.C.; Roberts, P. Detection of target spot and bacterial spot diseases in tomato using UAV-based and benchtop-based hyperspectral imaging techniques. Precis. Agric. 2020, 21, 955–978. [Google Scholar] [CrossRef]
  204. Abdulridha, J.; Ampatzidis, Y.; Roberts, P.; Kakarla, S.C. Detecting powdery mildew disease in squash at different stages using UAV-based hyperspectral imaging and artificial intelligence. Biosyst. Eng. 2020, 197, 135–148. [Google Scholar] [CrossRef]
  205. Abdulridha, J.; Ampatzidis, Y.; Qureshi, J.; Roberts, P. Laboratory and UAV-based identification and classification of tomato yellow leaf curl, bacterial spot, and target spot diseases in tomato utilizing hyperspectral imaging and machine learning. Remote Sens. 2020, 12, 2732. [Google Scholar] [CrossRef]
  206. Agarwal, M.; Gupta, S.K.; Biswas, K.K. Development of Efficient CNN model for Tomato crop disease identification. Sustain. Comput. Inform. Syst. 2020, 28, 100407. [Google Scholar] [CrossRef]
  207. Cristin, R.; Kumar, B.S.; Priya, C.; Karthick, K. Deep neural network based Rider-Cuckoo Search Algorithm for plant disease detection. Artif. Intell. Rev. 2020, 53, 4993–5018. [Google Scholar] [CrossRef]
  208. Kerkech, M.; Hafiane, A.; Canals, R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 2020, 174, 105446. [Google Scholar] [CrossRef]
  209. Li, D.; Wang, R.; Xie, C.; Liu, L.; Zhang, J.; Li, R.; Wang, F.; Zhou, M.; Liu, W. A Recognition Method for Rice Plant Diseases and Pests Video Detection Based on Deep Convolutional Neural Network. Sensors 2020, 20, 578. [Google Scholar] [CrossRef] [Green Version]
  210. Sambasivam, G.; Opiyo, G.D. A predictive machine learning application in agriculture: Cassava disease detection and classification with imbalanced dataset using convolutional neural networks. Egypt. Inform. J. 2020. [Google Scholar] [CrossRef]
  211. Gomez Selvaraj, M.; Vergara, A.; Montenegro, F.; Alonso Ruiz, H.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of banana plants and their major diseases through aerial images and machine learning methods: A case study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020, 169, 110–124. [Google Scholar] [CrossRef]
  212. Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. J. 2020, 86, 105933. [Google Scholar] [CrossRef]
  213. Karadağ, K.; Tenekeci, M.E.; Taşaltın, R.; Bilgili, A. Detection of pepper fusarium disease using machine learning algorithms based on spectral reflectance. Sustain. Comput. Inform. Syst. 2020, 28, 100299. [Google Scholar] [CrossRef]
  214. Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. 2020, 7, 566–574. [Google Scholar] [CrossRef]
  215. Lan, Y.; Huang, Z.; Deng, X.; Zhu, Z.; Huang, H.; Zheng, Z.; Lian, B.; Zeng, G.; Tong, Z. Comparison of machine learning methods for citrus greening detection on UAV multispectral images. Comput. Electron. Agric. 2020, 171, 105234. [Google Scholar] [CrossRef]
  216. Khalili, E.; Kouchaki, S.; Ramazi, S.; Ghanati, F. Machine Learning Techniques for Soybean Charcoal Rot Disease Prediction. Front. Plant Sci. 2020, 11, 2009. [Google Scholar] [CrossRef]
  217. Zhang, Z.; Flores, P.; Igathinathane, C.; Naik, D.L.; Kiran, R.; Ransom, J.K. Wheat Lodging Detection from UAS Imagery Using Machine Learning Algorithms. Remote Sens. 2020, 12, 1838. [Google Scholar] [CrossRef]
  218. Bhatia, A.; Chug, A.; Prakash Singh, A. Application of extreme learning machine in plant disease prediction for highly imbalanced dataset. J. Stat. Manag. Syst. 2020, 23, 1059–1068. [Google Scholar] [CrossRef]
  219. Karlekar, A.; Seal, A. SoyNet: Soybean leaf diseases classification. Comput. Electron. Agric. 2020, 172, 105342. [Google Scholar] [CrossRef]
  220. Abdu, A.M.; Mokji, M.M.; Sheikh, U.U. Automatic vegetable disease identification approach using individual lesion features. Comput. Electron. Agric. 2020, 176, 105660. [Google Scholar] [CrossRef]
  221. Hernández, S.; López, J.L. Uncertainty quantification for plant disease detection using Bayesian deep learning. Appl. Soft Comput. J. 2020, 96, 106597. [Google Scholar] [CrossRef]
222. Da Rocha Miranda, J.; de Carvalho Alves, M.; Ampelio Pozza, E.; Santos Neto, H. Detection of coffee berry necrosis by digital image processing of Landsat 8 OLI satellite imagery. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101983. [Google Scholar] [CrossRef]
  223. Zhang, Y.; Song, C.; Zhang, D. Deep Learning-Based Object Detection Improvement for Tomato Disease. IEEE Access 2020, 8, 56607–56614. [Google Scholar] [CrossRef]
  224. Darwish, A.; Ezzat, D.; Hassanien, A.E. An optimized model based on convolutional neural networks and orthogonal learning particle swarm optimization algorithm for plant diseases diagnosis. Swarm Evol. Comput. 2020, 52, 100616. [Google Scholar] [CrossRef]
  225. Xie, X.; Ma, Y.; Liu, B.; He, J.; Li, S.; Wang, H. A Deep-Learning-Based Real-Time Detector for Grape Leaf Diseases Using Improved Convolutional Neural Networks. Front. Plant Sci. 2020, 11, 751. [Google Scholar] [CrossRef]
  226. Chen, M.; Brun, F.; Raynal, M.; Makowski, D. Forecasting severe grape downy mildew attacks using machine learning. PLoS ONE 2020, 15, e0230254. [Google Scholar] [CrossRef]
  227. Sun, J.; Yang, Y.; He, X.; Wu, X. Northern Maize Leaf Blight Detection under Complex Field Environment Based on Deep Learning. IEEE Access 2020, 8, 33679–33688. [Google Scholar] [CrossRef]
  228. Kim, W.S.; Lee, D.H.; Kim, Y.J. Machine vision-based automatic disease symptom detection of onion downy mildew. Comput. Electron. Agric. 2020, 168, 105099. [Google Scholar] [CrossRef]
  229. Velásquez, D.; Sánchez, A.; Sarmiento, S.; Toro, M.; Maiza, M.; Sierra, B. A Method for Detecting Coffee Leaf Rust through Wireless Sensor Networks, Remote Sensing, and Deep Learning: Case Study of the Caturra Variety in Colombia. Appl. Sci. 2020, 10, 697. [Google Scholar] [CrossRef] [Green Version]
  230. Verma, S.; Chug, A.; Singh, A.P. Application of convolutional neural networks for evaluation of disease severity in tomato plant. J. Discret. Math. Sci. Cryptogr. 2020, 23, 273–282. [Google Scholar] [CrossRef]
  231. He, Y.; Zhou, Z.; Tian, L.; Liu, Y.; Luo, X. Brown rice planthopper (Nilaparvata lugens Stal) detection based on deep learning. Precis. Agric. 2020, 21, 1385–1402. [Google Scholar] [CrossRef]
  232. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine Disease Detection Network Based on Multispectral Images and Depth Map. Remote Sens. 2020, 12, 3305. [Google Scholar] [CrossRef]
  233. Yan, Q.; Yang, B.; Wang, W.; Wang, B.; Chen, P.; Zhang, J. Apple Leaf Diseases Recognition Based on An Improved Convolutional Neural Network. Sensors 2020, 20, 3535. [Google Scholar] [CrossRef] [PubMed]
  234. Wang, T.; Thomasson, J.A.; Yang, C.; Isakeit, T.; Nichols, R.L. Automatic Classification of Cotton Root Rot Disease Based on UAV Remote Sensing. Remote Sens. 2020, 12, 1310. [Google Scholar] [CrossRef] [Green Version]
  235. Ahmad, J.; Muhammad, K.; Ahmad, I.; Ahmad, W.; Smith, M.L.; Smith, L.N.; Jain, D.K.; Wang, H.; Mehmood, I. Visual features based boosted classification of weeds for real-time selective herbicide sprayer systems. Comput. Ind. 2018, 98, 23–33. [Google Scholar] [CrossRef]
  236. Bah, M.D.; Hafiane, A.; Canals, R. Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef] [Green Version]
  237. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
  238. Barrero, O.; Perdomo, S.A. RGB and multispectral UAV image fusion for Gramineae weed detection in rice fields. Precis. Agric. 2018, 19, 809–822. [Google Scholar] [CrossRef]
  239. Chavan, T.R.; Nandedkar, A.V. AgroAVNET for crops and weeds classification: A step forward in automatic farming. Comput. Electron. Agric. 2018, 154, 361–372. [Google Scholar] [CrossRef]
  240. De Castro, A.; Torres-Sánchez, J.; Peña, J.; Jiménez-Brenes, F.; Csillik, O.; López-Granados, F. An Automatic Random Forest-OBIA Algorithm for Early Weed Mapping between and within Crop Rows Using UAV Imagery. Remote Sens. 2018, 10, 285. [Google Scholar] [CrossRef] [Green Version]
  241. Gao, J.; Nuyttens, D.; Lootens, P.; He, Y.; Pieters, J.G. Recognising weeds in a maize crop using a random forest machine-learning algorithm and near-infrared snapshot mosaic hyperspectral imagery. Biosyst. Eng. 2018, 170, 39–50. [Google Scholar] [CrossRef]
  242. Gao, J.; Liao, W.; Nuyttens, D.; Lootens, P.; Vangeyte, J.; Pižurica, A.; He, Y.; Pieters, J.G. Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 43–53. [Google Scholar] [CrossRef]
  243. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Wen, S.; Zhang, H.; Zhang, Y. Accurate Weed Mapping and Prescription Map Generation Based on Fully Convolutional Networks Using UAV Imagery. Sensors 2018, 18, 3299. [Google Scholar] [CrossRef] [Green Version]
  244. Louargant, M.; Jones, G.; Faroux, R.; Paoli, J.-N.; Maillot, T.; Gée, C.; Villette, S. Unsupervised Classification Algorithm for Early Weed Detection in Row-Crops by Combining Spatial and Spectral Information. Remote Sens. 2018, 10, 761. [Google Scholar] [CrossRef] [Green Version]
  245. Sa, I.; Popović, M.; Khanna, R.; Chen, Z.; Lottes, P.; Liebisch, F.; Nieto, J.; Stachniss, C.; Walter, A.; Siegwart, R. WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming. Remote Sens. 2018, 10, 1423. [Google Scholar] [CrossRef] [Green Version]
246. Sabzi, S.; Abbaspour-Gilandeh, Y. Using video processing to classify potato plant and three types of weed using hybrid of artificial neural network and particle swarm algorithm. Meas. J. Int. Meas. Confed. 2018, 126, 22–36. [Google Scholar] [CrossRef]
  247. Teimouri, N.; Dyrmann, M.; Nielsen, P.; Mathiassen, S.; Somerville, G.; Jørgensen, R. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks. Sensors 2018, 18, 1580. [Google Scholar] [CrossRef] [Green Version]
  248. Akbarzadeh, S.; Paap, A.; Ahderom, S.; Apopei, B.; Alameh, K. Plant discrimination by Support Vector Machine classifier based on spectral reflectance. Comput. Electron. Agric. 2018, 148, 250–258. [Google Scholar] [CrossRef]
  249. Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Detection of Carolina Geranium (Geranium carolinianum) Growing in Competition with Strawberry Using Convolutional Neural Networks. Weed Sci. 2019, 67, 239–245. [Google Scholar] [CrossRef]
  250. Knoll, F.J.; Czymmek, V.; Harders, L.O.; Hussmann, S. Real-time classification of weeds in organic carrot production using deep learning algorithms. Comput. Electron. Agric. 2019, 167, 105097. [Google Scholar] [CrossRef]
  251. Kounalakis, T.; Triantafyllidis, G.A.; Nalpantidis, L. Deep learning-based visual recognition of rumex for robotic precision farming. Comput. Electron. Agric. 2019, 165, 104973. [Google Scholar] [CrossRef]
  252. Lambert, J.P.; Childs, D.Z.; Freckleton, R.P. Testing the ability of unmanned aerial systems and machine learning to map weeds at subfield scales: A test with the weed Alopecurus myosuroides (Huds). Pest Manag. Sci. 2019, 75, 2283–2294. [Google Scholar] [CrossRef]
  253. Ma, X.; Deng, X.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, e0215676. [Google Scholar] [CrossRef]
  254. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A multiclass weed species image dataset for deep learning. Sci. Rep. 2019, 9, 1–12. [Google Scholar] [CrossRef]
  255. Partel, V.; Charan Kakarla, S.; Ampatzidis, Y. Development and evaluation of a low-cost and smart technology for precision weed management utilizing artificial intelligence. Comput. Electron. Agric. 2019, 157, 339–350. [Google Scholar] [CrossRef]
  256. Rehman, T.U.; Zaman, Q.U.; Chang, Y.K.; Schumann, A.W.; Corscadden, K.W. Development and field evaluation of a machine vision based in-season weed detection system for wild blueberry. Comput. Electron. Agric. 2019, 162, 1–13. [Google Scholar] [CrossRef]
  257. Yu, J.; Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Deep learning for image-based weed detection in turfgrass. Eur. J. Agron. 2019, 104, 78–84. [Google Scholar] [CrossRef]
  258. Yu, J.; Schumann, A.W.; Cao, Z.; Sharpe, S.M.; Boyd, N.S. Weed Detection in Perennial Ryegrass with Deep Learning Convolutional Neural Network. Front. Plant Sci. 2019, 10, 1422. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  259. Lottes, P.; Behley, J.; Chebrolu, N.; Milioto, A.; Stachniss, C. Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming. J. Field Robot. 2020, 37, 20–34. [Google Scholar] [CrossRef]
  260. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through transfer learning. Comput. Electron. Agric. 2020, 171, 105306. [Google Scholar] [CrossRef]
  261. Dadashzadeh, M.; Abbaspour-Gilandeh, Y.; Mesri-Gundoshmian, T.; Sabzi, S.; Hernández-Hernández, J.L.; Hernández-Hernández, M.; Arribas, J.I. Weed Classification for Site-Specific Weed Management Using an Automated Stereo Computer-Vision Machine-Learning System in Rice Fields. Plants 2020, 9, 559. [Google Scholar] [CrossRef]
  262. Kamath, R.; Balachandra, M.; Prabhu, S. Paddy Crop and Weed Discrimination: A Multiple Classifier System Approach. Int. J. Agron. 2020, 2020. [Google Scholar] [CrossRef]
  263. Kamath, R.; Balachandra, M.; Prabhu, S. Crop and weed discrimination using Laws’ texture masks. Int. J. Agric. Biol. Eng. 2020, 13, 191–197. [Google Scholar] [CrossRef]
264. Le, V.N.T.; Ahderom, S.; Alameh, K. Performances of the LBP based algorithm over CNN models for detecting crops and weeds with similar morphologies. Sensors 2020, 20, 2193. [Google Scholar] [CrossRef]
  265. Gao, J.; French, A.P.; Pound, M.P.; He, Y.; Pridmore, T.P.; Pieters, J.G. Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields. Plant Methods 2020, 16, 29. [Google Scholar] [CrossRef] [Green Version]
  266. Hu, K.; Coleman, G.; Zeng, S.; Wang, Z.; Walsh, M. Graph weeds net: A graph-based deep learning method for weed recognition. Comput. Electron. Agric. 2020, 174, 105520. [Google Scholar] [CrossRef]
  267. Osorio, K.; Puerto, A.; Pedraza, C.; Jamaica, D.; Rodríguez, L. A Deep Learning Approach for Weed Detection in Lettuce Crops Using Multispectral Images. AgriEngineering 2020, 2, 471–488. [Google Scholar] [CrossRef]
  268. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S. Improving weeds identification with a repository of agricultural pre-trained deep neural networks. Comput. Electron. Agric. 2020, 175, 105593. [Google Scholar] [CrossRef]
  269. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  270. Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Goosegrass Detection in Strawberry and Tomato Using a Convolutional Neural Network. Sci. Rep. 2020, 10, 1–8. [Google Scholar] [CrossRef]
  271. Sabzi, S.; Abbaspour-Gilandeh, Y.; Arribas, J.I. An automatic visible-range video weed detection, segmentation and classification prototype in potato field. Heliyon 2020, 6, e03685. [Google Scholar] [CrossRef]
  272. Shendryk, Y.; Rossiter-Rachor, N.A.; Setterfield, S.A.; Levick, S.R. Leveraging High-Resolution Satellite Imagery and Gradient Boosting for Invasive Weed Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4443–4450. [Google Scholar] [CrossRef]
  273. Gašparović, M.; Zrinjski, M.; Barković, Đ.; Radočaj, D. An automatic method for weed mapping in oat fields based on UAV imagery. Comput. Electron. Agric. 2020, 173, 105385. [Google Scholar] [CrossRef]
  274. Sapkota, B.; Singh, V.; Neely, C.; Rajan, N.; Bagavathiannan, M. Detection of Italian Ryegrass in Wheat and Prediction of Competitive Interactions Using Remote-Sensing and Machine-Learning Techniques. Remote Sens. 2020, 12, 2977. [Google Scholar] [CrossRef]
  275. Ruigrok, T.; van Henten, E.; Booij, J.; van Boheemen, K.; Kootstra, G. Application-Specific Evaluation of a Weed-Detection Algorithm for Plant-Specific Spraying. Sensors 2020, 20, 7262. [Google Scholar] [CrossRef]
  276. Champ, J.; Mora-Fallas, A.; Goëau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8. [Google Scholar] [CrossRef] [PubMed]
  277. Petrich, L.; Lohrmann, G.; Neumann, M.; Martin, F.; Frey, A.; Stoll, A.; Schmidt, V. Detection of Colchicum autumnale in drone images, using a machine-learning approach. Precis. Agric. 2020, 21, 1291–1303. [Google Scholar] [CrossRef]
  278. Lam, O.H.Y.; Dogotari, M.; Prüm, M.; Vithlani, H.N.; Roers, C.; Melville, B.; Zimmer, F.; Becker, R. An open source workflow for weed mapping in native grassland using unmanned aerial vehicle: Using Rumex obtusifolius as a case study. Eur. J. Remote Sens. 2020, 1–18. [Google Scholar] [CrossRef]
  279. Abad, M.; Abkar, A.; Mojaradi, B. Effect of the Temporal Gradient of Vegetation Indices on Early-Season Wheat Classification Using the Random Forest Classifier. Appl. Sci. 2018, 8, 1216. [Google Scholar] [CrossRef] [Green Version]
  280. Ghazaryan, G.; Dubovyk, O.; Löw, F.; Lavreniuk, M.; Kolotii, A.; Schellberg, J.; Kussul, N. A rule-based approach for crop identification using multi-temporal and multi-sensor phenological metrics. Eur. J. Remote Sens. 2018, 51, 511–524. [Google Scholar] [CrossRef]
  281. Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images. Remote Sens. 2018, 10, 75. [Google Scholar] [CrossRef] [Green Version]
  282. Nemmaoui, A.; Aguilar, M.A.; Aguilar, F.J.; Novelli, A.; García Lorca, A. Greenhouse Crop Identification from Multi-Temporal Multi-Sensor Satellite Imagery Using Object-Based Approach: A Case Study from Almería (Spain). Remote Sens. 2018, 10, 1751. [Google Scholar] [CrossRef] [Green Version]
283. Xu, L.; Zhang, H.; Wang, C.; Zhang, B.; Liu, M. Crop Classification Based on Temporal Information Using Sentinel-1 SAR Time-Series Data. Remote Sens. 2019, 11, 53. [Google Scholar] [CrossRef] [Green Version]
  284. Kwak, G.-H.; Park, N.-W. Impact of Texture Information on Crop Classification with Machine Learning and UAV Images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef] [Green Version]
  285. Paul, S.; Kumar, D.N. Evaluation of Feature Selection and Feature Extraction Techniques on Multi-Temporal Landsat-8 Images for Crop Classification. Remote Sens. Earth Syst. Sci. 2019, 2, 197–207. [Google Scholar] [CrossRef]
  286. Piedelobo, L.; Hernández-López, D.; Ballesteros, R.; Chakhar, A.; Del Pozo, S.; González-Aguilera, D.; Moreno, M.A. Scalable pixel-based crop classification combining Sentinel-2 and Landsat-8 data time series: Case study of the Duero river basin. Agric. Syst. 2019, 171, 36–50. [Google Scholar] [CrossRef]
  287. Song, Q.; Xiang, M.; Hovis, C.; Zhou, Q.; Lu, M.; Tang, H.; Wu, W. Object-based feature selection for crop classification using multi-temporal high-resolution imagery. Int. J. Remote Sens. 2019, 40, 2053–2068. [Google Scholar] [CrossRef]
  288. Sonobe, R. Parcel-Based Crop Classification Using Multi-Temporal TerraSAR-X Dual Polarimetric Data. Remote Sens. 2019, 11, 1148. [Google Scholar] [CrossRef] [Green Version]
  289. Sun, Y.; Luo, J.; Wu, T.; Zhou, Y.; Liu, H.; Gao, L.; Dong, W.; Liu, W.; Yang, Y.; Hu, X.; et al. Synchronous Response Analysis of Features for Remote Sensing Crop Classification Based on Optical and SAR Time-Series Data. Sensors 2019, 19, 4227. [Google Scholar] [CrossRef] [Green Version]
  290. Sun, C.; Bian, Y.; Zhou, T.; Pan, J. Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region. Sensors 2019, 19, 2401. [Google Scholar] [CrossRef] [Green Version]
  291. Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990. [Google Scholar] [CrossRef] [Green Version]
292. Ustuner, M.; Balik Sanli, F. Polarimetric Target Decompositions and Light Gradient Boosting Machine for Crop Classification: A Comparative Evaluation. ISPRS Int. J. Geo-Inf. 2019, 8, 97. [Google Scholar] [CrossRef] [Green Version]
  293. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68. [Google Scholar] [CrossRef] [Green Version]
  294. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673. [Google Scholar] [CrossRef] [Green Version]
  295. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  296. Zhou, Y.; Luo, J.; Feng, L.; Zhou, X. DCN-Based Spatial Features for Improving Parcel-Based Crop Classification Using High-Resolution Optical Images and Multi-Temporal SAR Data. Remote Sens. 2019, 11, 1619. [Google Scholar] [CrossRef] [Green Version]
  297. Zhou, Y.; Luo, J.; Feng, L.; Yang, Y.; Chen, Y.; Wu, W. Long-short-term-memory-based crop classification using high-resolution optical images and multi-temporal SAR data. GIScience Remote Sens. 2019, 56, 1170–1191. [Google Scholar] [CrossRef]
298. Mazzia, V.; Khaliq, A.; Chiaberge, M. Improvement in Land Cover and Crop Classification based on Temporal Features Learning from Sentinel-2 Data Using Recurrent-Convolutional Neural Network (R-CNN). Appl. Sci. 2020, 10, 238. [Google Scholar] [CrossRef] [Green Version]
  299. Nguyen Thanh Le, V.; Apopei, B.; Alameh, K. Effective plant discrimination based on the combination of local binary pattern operators and multiclass support vector machine methods. Inf. Process. Agric. 2019, 6, 116–131. [Google Scholar] [CrossRef]
  300. Cinar, I. Classification of Rice Varieties Using Artificial Intelligence Methods. Int. J. Intell. Syst. Appl. Eng. 2019, 7, 188–194. [Google Scholar] [CrossRef] [Green Version]
  301. Tan, K.; Wang, R.; Li, M.; Gong, Z. Discriminating soybean seed varieties using hyperspectral imaging and machine learning. J. Comput. Methods Sci. Eng. 2019, 19, 1001–1015. [Google Scholar] [CrossRef]
  302. Zhu, S.; Zhou, L.; Gao, P.; Bao, Y.; He, Y.; Feng, L. Near-Infrared Hyperspectral Imaging Combined with Deep Learning to Identify Cotton Seed Varieties. Molecules 2019, 24, 3268. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  303. Bisen, D. Deep convolutional neural network based plant species recognition through features of leaf. Multimed. Tools Appl. 2020, 80, 6443–6456. [Google Scholar] [CrossRef]
  304. Bambil, D.; Pistori, H.; Bao, F.; Weber, V.; Alves, F.M.; Gonçalves, E.G.; de Alencar Figueiredo, L.F.; Abreu, U.G.P.; Arruda, R.; Bortolotto, I.M. Plant species identification using color learning resources, shape, texture, through machine learning and artificial neural networks. Environ. Syst. Decis. 2020, 40, 480–484. [Google Scholar] [CrossRef]
  305. Huixian, J. The Analysis of Plants Image Recognition Based on Deep Learning and Artificial Neural Network. IEEE Access 2020, 8, 68828–68841. [Google Scholar] [CrossRef]
  306. Shelestov, A.; Lavreniuk, M.; Vasiliev, V.; Shumilo, L.; Kolotii, A.; Yailymov, B.; Kussul, N.; Yailymova, H. Cloud Approach to Automated Crop Classification Using Sentinel-1 Imagery. IEEE Trans. Big Data 2020, 6, 572–582. [Google Scholar] [CrossRef]
  307. Ji, S.; Zhang, Z.; Zhang, C.; Wei, S.; Lu, M.; Duan, Y. Learning discriminative spatiotemporal features for precise crop classification from multi-temporal satellite images. Int. J. Remote Sens. 2020, 41, 3162–3174. [Google Scholar] [CrossRef]
  308. Bhuyar, N. Crop Classification with Multi-Temporal Satellite Image Data. Int. J. Eng. Res. 2020, V9. [Google Scholar] [CrossRef]
  309. Zhang, H.; Kang, J.; Xu, X.; Zhang, L. Accessing the temporal and spectral features in crop type mapping using multi-temporal Sentinel-2 imagery: A case study of Yi’an County, Heilongjiang province, China. Comput. Electron. Agric. 2020, 176, 105618. [Google Scholar] [CrossRef]
  310. Kyere, I.; Astor, T.; Graß, R.; Wachendorf, M. Agricultural crop discrimination in a heterogeneous low-mountain range region based on multi-temporal and multi-sensor satellite data. Comput. Electron. Agric. 2020, 179, 105864. [Google Scholar] [CrossRef]
  311. Xu, J.; Zhu, Y.; Zhong, R.; Lin, Z.; Xu, J.; Jiang, H.; Huang, J.; Li, H.; Lin, T. DeepCropMapping: A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping. Remote Sens. Environ. 2020, 247, 111946. [Google Scholar] [CrossRef]
  312. Liao, C.; Wang, J.; Xie, Q.; Al Baz, A.; Huang, X.; Shang, J.; He, Y. Synergistic Use of Multi-Temporal RADARSAT-2 and VENµS Data for Crop Classification Based on 1D Convolutional Neural Network. Remote Sens. 2020, 12, 832. [Google Scholar] [CrossRef] [Green Version]
  313. Zhang, W.; Liu, H.; Wu, W.; Zhan, L.; Wei, J. Mapping Rice Paddy Based on Machine Learning with Sentinel-2 Multi-Temporal Data: Model Comparison and Transferability. Remote Sens. 2020, 12, 1620. [Google Scholar] [CrossRef]
  314. Yi, Z.; Jia, L.; Chen, Q. Crop Classification Using Multi-Temporal Sentinel-2 Data in the Shiyang River Basin of China. Remote Sens. 2020, 12, 4052. [Google Scholar] [CrossRef]
  315. Guo, J.; Li, H.; Ning, J.; Han, W.; Zhang, W.; Zhou, Z.-S. Feature Dimension Reduction Using Stacked Sparse Auto-Encoders for Crop Classification with Multi-Temporal, Quad-Pol SAR Data. Remote Sens. 2020, 12, 321. [Google Scholar] [CrossRef] [Green Version]
  316. Maponya, M.G.; van Niekerk, A.; Mashimbye, Z.E. Pre-harvest classification of crop types using a Sentinel-2 time-series and machine learning. Comput. Electron. Agric. 2020, 169, 105164. [Google Scholar] [CrossRef]
  317. Minallah, N.; Tariq, M.; Aziz, N.; Khan, W.; Rehman, A.U.; Belhaouari, S.B. On the performance of fusion based planet-scope and Sentinel-2 data for crop classification using inception inspired deep convolutional neural network. PLoS ONE 2020, 15, e0239746. [Google Scholar] [CrossRef]
  318. Chakhar, A.; Ortega-Terol, D.; Hernández-López, D.; Ballesteros, R.; Ortega, J.F.; Moreno, M.A. Assessing the Accuracy of Multiple Classification Algorithms for Crop Classification Using Landsat-8 and Sentinel-2 Data. Remote Sens. 2020, 12, 1735. [Google Scholar] [CrossRef]
  319. Mandal, D.; Kumar, V.; Rao, Y.S. An assessment of temporal RADARSAT-2 SAR data for crop classification using KPCA based support vector machine. Geocarto Int. 2020. [Google Scholar] [CrossRef]
  320. Kobayashi, N.; Tani, H.; Wang, X.; Sonobe, R. Crop classification using spectral indices derived from Sentinel-2A imagery. J. Inf. Telecommun. 2020, 4, 67–90. [Google Scholar] [CrossRef]
  321. Tu, K.-L.; Li, L.-J.; Yang, L.-M.; Wang, J.-H.; Sun, Q. Selection for high quality pepper seeds by machine vision and classifiers. J. Integr. Agric. 2018, 17, 1999–2006. [Google Scholar] [CrossRef]
  322. Wolanin, A.; Camps-Valls, G.; Gómez-Chova, L.; Mateo-García, G.; van der Tol, C.; Zhang, Y.; Guanter, L. Estimating crop primary productivity with Sentinel-2 and Landsat 8 using machine learning methods trained with radiative transfer simulations. Remote Sens. Environ. 2019, 225, 441–457. [Google Scholar] [CrossRef]
  323. Yang, B.; Wang, M.; Sha, Z.; Wang, B.; Chen, J.; Yao, X.; Cheng, T.; Cao, W.; Zhu, Y. Evaluation of Aboveground Nitrogen Content of Winter Wheat Using Digital Imagery of Unmanned Aerial Vehicles. Sensors 2019, 19, 4416. [Google Scholar] [CrossRef] [Green Version]
  324. Genze, N.; Bharti, R.; Grieb, M.; Schultheiss, S.J.; Grimm, D.G. Accurate machine learning-based germination detection, prediction and quality assessment of three grain crops. Plant Methods 2020, 16, 157. [Google Scholar] [CrossRef]
  325. De Medeiros, A.D.; Pinheiro, D.T.; Xavier, W.A.; da Silva, L.J.; dos Santos Dias, D.C.F. Quality classification of Jatropha curcas seeds using radiographic images and machine learning. Ind. Crop. Prod. 2020, 146, 112162. [Google Scholar] [CrossRef]
  326. Baath, G.S.; Baath, H.K.; Gowda, P.H.; Thomas, J.P.; Northup, B.K.; Rao, S.C.; Singh, H. Predicting Forage Quality of Warm-Season Legumes by Near Infrared Spectroscopy Coupled with Machine Learning Techniques. Sensors 2020, 20, 867. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  327. Medeiros, A.D.D.; Silva, L.J.D.; Ribeiro, J.P.O.; Ferreira, K.C.; Rosas, J.T.F.; Santos, A.A.; Silva, C.B.D. Machine Learning for Seed Quality Classification: An Advanced Approach Using Merger Data from FT-NIR Spectroscopy and X-ray Imaging. Sensors 2020, 20, 4319. [Google Scholar] [CrossRef] [PubMed]
  328. Lee, J.; Nazki, H.; Baek, J.; Hong, Y.; Lee, M. Artificial Intelligence Approach for Tomato Detection and Mass Estimation in Precision Agriculture. Sustainability 2020, 12, 9138. [Google Scholar] [CrossRef]
  329. Yang, B.; Gao, Y.; Yan, Q.; Qi, L.; Zhu, Y.; Wang, B. Estimation Method of Soluble Solid Content in Peach Based on Deep Features of Hyperspectral Imagery. Sensors 2020, 20, 5021. [Google Scholar] [CrossRef]
  330. Gutiérrez, S.; Diago, M.P.; Fernández-Novales, J.; Tardaguila, J. Vineyard water status assessment using on-the-go thermal imaging and machine learning. PLoS ONE 2018, 13, e0192037. [Google Scholar] [CrossRef]
  331. Loggenberg, K.; Strever, A.; Greyling, B.; Poona, N. Modelling Water Stress in a Shiraz Vineyard Using Hyperspectral Imaging and Machine Learning. Remote Sens. 2018, 10, 202. [Google Scholar] [CrossRef] [Green Version]
  332. Zhang, J.; Zhu, Y.; Zhang, X.; Ye, M.; Yang, J. Developing a Long Short-Term Memory (LSTM) based model for predicting water table depth in agricultural areas. J. Hydrol. 2018, 561, 918–929. [Google Scholar] [CrossRef]
  333. Goldstein, A.; Fink, L.; Meitin, A.; Bohadana, S.; Lutenberg, O.; Ravid, G. Applying machine learning on sensor data for irrigation recommendations: Revealing the agronomist’s tacit knowledge. Precis. Agric. 2018, 19, 421–444. [Google Scholar] [CrossRef]
  334. Romero, M.; Luo, Y.; Su, B.; Fuentes, S. Vineyard water status estimation using multispectral imagery from an UAV platform and machine learning algorithms for irrigation scheduling management. Comput. Electron. Agric. 2018, 147, 109–117. [Google Scholar] [CrossRef]
  335. Kisi, O.; Alizamir, M. Modelling reference evapotranspiration using a new wavelet conjunction heuristic method: Wavelet extreme learning machine vs wavelet neural networks. Agric. For. Meteorol. 2018, 263, 41–48. [Google Scholar] [CrossRef]
  336. Fan, J.; Yue, W.; Wu, L.; Zhang, F.; Cai, H.; Wang, X.; Lu, X.; Xiang, Y. Evaluation of SVM, ELM and four tree-based ensemble models for predicting daily reference evapotranspiration using limited meteorological data in different climates of China. Agric. For. Meteorol. 2018, 263, 225–241. [Google Scholar] [CrossRef]
  337. Adeyemi, O.; Grove, I.; Peets, S.; Domun, Y.; Norton, T. Dynamic Neural Network Modelling of Soil Moisture Content for Predictive Irrigation Scheduling. Sensors 2018, 18, 3408. [Google Scholar] [CrossRef] [Green Version]
  338. Angelaki, A.; Singh Nain, S.; Singh, V.; Sihag, P. Estimation of models for cumulative infiltration of soil using machine learning methods. ISH J. Hydraul. Eng. 2018. [Google Scholar] [CrossRef]
  339. Goap, A.; Sharma, D.; Shukla, A.K.; Rama Krishna, C. An IoT based smart irrigation management system using Machine learning and open source technologies. Comput. Electron. Agric. 2018, 155, 41–49. [Google Scholar] [CrossRef]
  340. Prasad, R.; Deo, R.C.; Li, Y.; Maraseni, T. Soil moisture forecasting by a hybrid machine learning technique: ELM integrated with ensemble empirical mode decomposition. Geoderma 2018, 330, 136–161. [Google Scholar] [CrossRef]
  341. Tang, D.; Feng, Y.; Gong, D.; Hao, W.; Cui, N. Evaluation of artificial intelligence models for actual crop evapotranspiration modeling in mulched and non-mulched maize croplands. Comput. Electron. Agric. 2018, 152, 375–384. [Google Scholar] [CrossRef]
  342. Sihag, P.; Singh, V.P.; Angelaki, A.; Kumar, V.; Sepahvand, A.; Golia, E. Modelling of infiltration using artificial intelligence techniques in semi-arid Iran. Hydrol. Sci. J. 2019, 64, 1647–1658. [Google Scholar] [CrossRef]
  343. Chen, H.; Chen, A.; Xu, L.; Xie, H.; Qiao, H.; Lin, Q.; Cai, K. A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources. Agric. Water Manag. 2020, 240, 106303. [Google Scholar] [CrossRef]
  344. Afzaal, H.; Farooque, A.A.; Abbas, F.; Acharya, B.; Esau, T. Computation of Evapotranspiration with Artificial Intelligence for Precision Water Resource Management. Appl. Sci. 2020, 10, 1621. [Google Scholar] [CrossRef] [Green Version]
  345. Li, P.; Zha, Y.; Shi, L.; Tso, C.H.M.; Zhang, Y.; Zeng, W. Comparison of the use of a physical-based model with data assimilation and machine learning methods for simulating soil water dynamics. J. Hydrol. 2020, 584, 124692. [Google Scholar] [CrossRef]
  346. Fernández-López, A.; Marín-Sánchez, D.; García-Mateos, G.; Ruiz-Canales, A.; Ferrández-Villena-García, M.; Molina-Martínez, J.M. A Machine Learning Method to Estimate Reference Evapotranspiration Using Soil Moisture Sensors. Appl. Sci. 2020, 10, 1912. [Google Scholar] [CrossRef] [Green Version]
  347. Xavier, L.C.P.; Carvalho, T.M.N.; Pontes Filho, J.D.; Souza Filho, F.D.A.; Silva, S.M.O.D. Use of Machine Learning in Evaluation of Drought Perception in Irrigated Agriculture: The Case of an Irrigated Perimeter in Brazil. Water 2020, 12, 1546. [Google Scholar] [CrossRef]
  348. Yamaç, S.S.; Todorovic, M. Estimation of daily potato crop evapotranspiration using three different machine learning algorithms and four scenarios of available meteorological data. Agric. Water Manag. 2020, 228, 105875. [Google Scholar] [CrossRef]
  349. Mosavi, A.; Sajedi-Hosseini, F.; Choubin, B.; Taromideh, F.; Rahi, G.; Dineva, A. Susceptibility Mapping of Soil Water Erosion Using Machine Learning Models. Water 2020, 12, 1995. [Google Scholar] [CrossRef]
  350. Fung, K.F.; Huang, Y.F.; Koo, C.H.; Mirzaei, M. Improved svr machine learning models for agricultural drought prediction at downstream of langat river basin, Malaysia. J. Water Clim. Chang. 2020, 11, 1383–1398. [Google Scholar] [CrossRef]
  351. Ferreira, L.B.; da Cunha, F.F. New approach to estimate daily reference evapotranspiration based on hourly temperature and relative humidity using machine learning and deep learning. Agric. Water Manag. 2020, 234, 106113. [Google Scholar] [CrossRef]
  352. Zhu, B.; Feng, Y.; Gong, D.; Jiang, S.; Zhao, L.; Cui, N. Hybrid particle swarm optimization with extreme learning machine for daily reference evapotranspiration prediction from limited climatic data. Comput. Electron. Agric. 2020, 173, 105430. [Google Scholar] [CrossRef]
  353. Yaseen, Z.M.; Al-Juboori, A.M.; Beyaztas, U.; Al-Ansari, N.; Chau, K.-W.; Qi, C.; Ali, M.; Salih, S.Q.; Shahid, S. Prediction of evaporation in arid and semi-arid regions: A comparative study using different machine learning models. Eng. Appl. Comput. Fluid Mech. 2020, 14, 70–89. [Google Scholar] [CrossRef] [Green Version]
  354. Wu, L.; Huang, G.; Fan, J.; Ma, X.; Zhou, H.; Zeng, W. Hybrid extreme learning machine with meta-heuristic algorithms for monthly pan evaporation prediction. Comput. Electron. Agric. 2020, 168, 105115. [Google Scholar] [CrossRef]
  355. Dos Santos Farias, D.B.; Althoff, D.; Rodrigues, L.N.; Filgueiras, R. Performance evaluation of numerical and machine learning methods in estimating reference evapotranspiration in a Brazilian agricultural frontier. Theor. Appl. Climatol. 2020, 142, 1481–1492. [Google Scholar] [CrossRef]
  356. Raza, A.; Shoaib, M.; Faiz, M.A.; Baig, F.; Khan, M.M.; Ullah, M.K.; Zubair, M. Comparative Assessment of Reference Evapotranspiration Estimation Using Conventional Method and Machine Learning Algorithms in Four Climatic Regions. Pure Appl. Geophys. 2020, 177, 4479–4508. [Google Scholar] [CrossRef]
  357. Tufaner, F.; Özbeyaz, A. Estimation and easy calculation of the Palmer Drought Severity Index from the meteorological data by using the advanced machine learning algorithms. Environ. Monit. Assess. 2020, 192, 576. [Google Scholar] [CrossRef] [PubMed]
  358. Sagan, V.; Peterson, K.T.; Maimaitijiang, M.; Sidike, P.; Sloan, J.; Greeling, B.A.; Maalouf, S.; Adams, C. Monitoring inland water quality using remote sensing: Potential and limitations of spectral indices, bio-optical simulations, machine learning, and cloud computing. Earth Sci. Rev. 2020, 205, 103187. [Google Scholar] [CrossRef]
  359. Lee, S.; Hyun, Y.; Lee, S.; Lee, M.-J. Groundwater Potential Mapping Using Remote Sensing and GIS-Based Machine Learning Techniques. Remote Sens. 2020, 12, 1200. [Google Scholar] [CrossRef] [Green Version]
  360. Majumdar, S.; Smith, R.; Butler, J.J.; Lakshmi, V. Groundwater Withdrawal Prediction Using Integrated Multitemporal Remote Sensing Data Sets and Machine Learning. Water Resour. Res. 2020, 56, e2020WR028059. [Google Scholar] [CrossRef]
  361. Band, S.S.; Janizadeh, S.; Pal, S.C.; Chowdhuri, I.; Siabi, Z.; Norouzi, A.; Melesse, A.M.; Shokri, M.; Mosavi, A. Comparative Analysis of Artificial Intelligence Models for Accurate Estimation of Groundwater Nitrate Concentration. Sensors 2020, 20, 5763. [Google Scholar] [CrossRef]
  362. Hong, Y.; Chen, S.; Zhang, Y.; Chen, Y.; Yu, L.; Liu, Y.; Liu, Y.; Cheng, H.; Liu, Y. Rapid identification of soil organic matter level via visible and near-infrared spectroscopy: Effects of two-dimensional correlation coefficient and extreme learning machine. Sci. Total Environ. 2018, 644, 1232–1243. [Google Scholar] [CrossRef]
  363. Jha, S.K.; Ahmad, Z. Soil microbial dynamics prediction using machine learning regression methods. Comput. Electron. Agric. 2018, 147, 158–165. [Google Scholar] [CrossRef]
  364. Wang, X.; Zhang, F.; Ding, J.; Kung, H.T.; Latif, A.; Johnson, V.C. Estimation of soil salt content (SSC) in the Ebinur Lake Wetland National Nature Reserve (ELWNNR), Northwest China, based on a Bootstrap-BP neural network model and optimal spectral indices. Sci. Total Environ. 2018, 615, 918–930. [Google Scholar] [CrossRef]
  365. Zeraatpisheh, M.; Ayoubi, S.; Jafari, A.; Tajik, S.; Finke, P. Digital mapping of soil properties using multiple machine learning in a semi-arid region, central Iran. Geoderma 2019, 338, 445–452. [Google Scholar] [CrossRef]
  366. Chen, D.; Chang, N.; Xiao, J.; Zhou, Q.; Wu, W. Mapping dynamics of soil organic matter in croplands with MODIS data and machine learning algorithms. Sci. Total Environ. 2019, 669, 844–855. [Google Scholar] [CrossRef]
  367. Wu, T.; Luo, J.; Dong, W.; Sun, Y.; Xia, L.; Zhang, X. Geo-Object-Based Soil Organic Matter Mapping Using Machine Learning Algorithms with Multi-Source Geo-Spatial Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1091–1106. [Google Scholar] [CrossRef]
  368. Ghorbani, M.A.; Deo, R.C.; Kashani, M.H.; Shahabi, M.; Ghorbani, S. Artificial intelligence-based fast and efficient hybrid approach for spatial modelling of soil electrical conductivity. Soil Tillage Res. 2019, 186, 152–164. [Google Scholar] [CrossRef]
  369. Ge, X.; Wang, J.; Ding, J.; Cao, X.; Zhang, Z.; Liu, J.; Li, X. Combining UAV-based hyperspectral imagery and machine learning algorithms for soil moisture content monitoring. PeerJ 2019, 7, e6926. [Google Scholar] [CrossRef]
  370. Feng, Y.; Cui, N.; Hao, W.; Gao, L.; Gong, D. Estimation of soil temperature from meteorological data using different machine learning models. Geoderma 2019, 338, 67–77. [Google Scholar] [CrossRef]
  371. Nawar, S.; Mouazen, A.M. On-line vis-NIR spectroscopy prediction of soil organic carbon using machine learning. Soil Tillage Res. 2019, 190, 120–127. [Google Scholar] [CrossRef]
  372. Ng, W.; Minasny, B.; Montazerolghaem, M.; Padarian, J.; Ferguson, R.; Bailey, S.; McBratney, A.B. Convolutional neural network for simultaneous prediction of several soil properties using visible/near-infrared, mid-infrared, and their combined spectra. Geoderma 2019, 352, 251–267. [Google Scholar] [CrossRef]
  373. Padarian, J.; Minasny, B.; McBratney, A.B. Using deep learning to predict soil properties from regional spectral data. Geoderma Reg. 2019, 16, e00198. [Google Scholar] [CrossRef]
  374. Mohapatra, A.G.; Lenka, S.K.; Keswani, B. Neural Network and Fuzzy Logic Based Smart DSS Model for Irrigation Notification and Control in Precision Agriculture. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2019, 89, 67–76. [Google Scholar] [CrossRef]
  375. Bashir, R.N.; Bajwa, I.S.; Shahid, M.M.A. Internet of Things and Machine-Learning-Based Leaching Requirements Estimation for Saline Soils. IEEE Internet Things J. 2020, 7, 4464–4472. [Google Scholar] [CrossRef]
  376. Chakrabortty, R.; Pal, S.C.; Sahana, M.; Mondal, A.; Dou, J.; Pham, B.T.; Yunus, A.P. Soil erosion potential hotspot zone identification using machine learning and statistical approaches in eastern India. Nat. Hazards 2020, 1–36. [Google Scholar] [CrossRef]
377. Helfer, G.A.; Victória Barbosa, J.L.; dos Santos, R.; da Costa, A.B. A computational model for soil fertility prediction in ubiquitous agriculture. Comput. Electron. Agric. 2020, 175, 105602. [Google Scholar] [CrossRef]
  378. Araya, S.; Fryjoff-Hung, A.; Anderson, A.; Viers, J.; Ghezzehei, T. Advances in Soil Moisture Retrieval from Multispectral Remote Sensing Using Unmanned Aircraft Systems and Machine Learning Techniques. Hydrol. Earth Syst. Sci. Discuss. 2020, 1–33. [Google Scholar] [CrossRef]
  379. Yamaç, S.S.; Şeker, C.; Negiş, H. Evaluation of machine learning methods to predict soil moisture constants with different combinations of soil input data for calcareous soils in a semi arid area. Agric. Water Manag. 2020, 234, 106121. [Google Scholar] [CrossRef]
  380. Alizamir, M.; Kisi, O.; Ahmed, A.N.; Mert, C.; Fai, C.M.; Kim, S.; Kim, N.W.; El-Shafie, A. Advanced machine learning model for better prediction accuracy of soil temperature at different depths. PLoS ONE 2020, 15, e0231055. [Google Scholar] [CrossRef] [Green Version]
  381. Sanuade, O.A.; Hassan, A.M.; Akanji, A.O.; Olaojo, A.A.; Oladunjoye, M.A.; Abdulraheem, A. New empirical equation to estimate the soil moisture content based on thermal properties using machine learning techniques. Arab. J. Geosci. 2020, 13, 377. [Google Scholar] [CrossRef]
  382. Lei, X.; Chen, W.; Avand, M.; Janizadeh, S.; Kariminejad, N.; Shahabi, H.; Costache, R.; Shahabi, H.; Shirzadi, A.; Mosavi, A. GIS-Based Machine Learning Algorithms for Gully Erosion Susceptibility Mapping in a Semi-Arid Region of Iran. Remote Sens. 2020, 12, 2478. [Google Scholar] [CrossRef]
  383. Mosavi, A.; Hosseini, F.S.; Choubin, B.; Goodarzi, M.; Dineva, A.A. Groundwater Salinity Susceptibility Mapping Using Classifier Ensemble and Bayesian Machine Learning Models. IEEE Access 2020, 8, 145564–145576. [Google Scholar] [CrossRef]
  384. Hu, B.; Xue, J.; Zhou, Y.; Shao, S.; Fu, Z.; Li, Y.; Chen, S.; Qi, L.; Shi, Z. Modelling bioaccumulation of heavy metals in soil-crop ecosystems and identifying its controlling factors using machine learning. Environ. Pollut. 2020, 262, 114308. [Google Scholar] [CrossRef]
  385. Taghizadeh-Mehrjardi, R.; Nabiollahi, K.; Rasoli, L.; Kerry, R.; Scholten, T. Land Suitability Assessment and Agricultural Production Sustainability Using Machine Learning Models. Agronomy 2020, 10, 573. [Google Scholar] [CrossRef]
386. John, K.; Isong, I.A.; Kebonye, N.M.; Ayito, E.O.; Agyeman, P.C.; Afu, S.M. Using Machine Learning Algorithms to Estimate Soil Organic Carbon Variability with Environmental Variables and Soil Nutrient Indicators in an Alluvial Soil. Land 2020, 9, 487. [Google Scholar] [CrossRef]
  387. Benke, K.K.; Norng, S.; Robinson, N.J.; Chia, K.; Rees, D.B.; Hopley, J. Development of pedotransfer functions by machine learning for prediction of soil electrical conductivity and organic carbon content. Geoderma 2020, 366, 114210. [Google Scholar] [CrossRef]
  388. Rentschler, T.; Werban, U.; Ahner, M.; Behrens, T.; Gries, P.; Scholten, T.; Teuber, S.; Schmidt, K. 3D mapping of soil organic carbon content and soil moisture with multiple geophysical sensors and machine learning. Vadose Zone J. 2020, 19, e20062. [Google Scholar] [CrossRef]
  389. Rivera, J.I.; Bonilla, C.A. Predicting soil aggregate stability using readily available soil properties and machine learning techniques. Catena 2020, 187, 104408. [Google Scholar] [CrossRef]
  390. Mahmoudzadeh, H.; Matinfar, H.R.; Taghizadeh-Mehrjardi, R.; Kerry, R. Spatial prediction of soil organic carbon using machine learning techniques in western Iran. Geoderma Reg. 2020, 21, e00260. [Google Scholar] [CrossRef]
  391. Adab, H.; Morbidelli, R.; Saltalippi, C.; Moradian, M.; Ghalhari, G.A. Machine Learning to Estimate Surface Soil Moisture from Remote Sensing Data. Water 2020, 12, 3223. [Google Scholar] [CrossRef]
  392. Emadi, M.; Taghizadeh-Mehrjardi, R.; Cherati, A.; Danesh, M.; Mosavi, A.; Scholten, T. Predicting and Mapping of Soil Organic Carbon Using Machine Learning Algorithms in Northern Iran. Remote Sens. 2020, 12, 2234. [Google Scholar] [CrossRef]
  393. Arabameri, A.; Chen, W.; Loche, M.; Zhao, X.; Li, Y.; Lombardo, L.; Cerda, A.; Pradhan, B.; Bui, D.T. Comparison of machine learning models for gully erosion susceptibility mapping. Geosci. Front. 2020, 11, 1609–1620. [Google Scholar] [CrossRef]
  394. Phinzi, K.; Abriha, D.; Bertalan, L.; Holb, I.; Szabó, S. Machine Learning for Gully Feature Extraction Based on a Pan-Sharpened Multispectral Image: Multiclass vs. Binary Approach. ISPRS Int. J. Geo-Inf. 2020, 9, 252. [Google Scholar] [CrossRef] [Green Version]
  395. Du Plessis, C.; van Zijl, G.; Van Tol, J.; Manyevere, A. Machine learning digital soil mapping to inform gully erosion mitigation measures in the Eastern Cape, South Africa. Geoderma 2020, 368, 114287. [Google Scholar] [CrossRef]
396. D’Eath, R.B.; Jack, M.; Futro, A.; Talbot, D.; Zhu, Q.; Barclay, D.; Baxter, E.M. Automatic early warning of tail biting in pigs: 3D cameras can detect lowered tail posture before an outbreak. PLoS ONE 2018, 13, e0194524.
397. Mansbridge, N.; Mitsch, J.; Bollard, N.; Ellis, K.; Miguel-Pacheco, G.; Dottorini, T.; Kaler, J. Feature Selection and Comparison of Machine Learning Algorithms in Classification of Grazing and Rumination Behaviour in Sheep. Sensors 2018, 18, 3532.
398. Walton, E.; Casey, C.; Mitsch, J.; Vázquez-Diosdado, J.A.; Yan, J.; Dottorini, T.; Ellis, K.A.; Winterlich, A.; Kaler, J. Evaluation of sampling frequency, window size and sensor position for classification of sheep behaviour. R. Soc. Open Sci. 2018, 5, 171442.
399. Yang, Q.; Xiao, D.; Lin, S. Feeding behavior recognition for group-housed pigs with the Faster R-CNN. Comput. Electron. Agric. 2018, 155, 453–460.
400. Zheng, C.; Zhu, X.; Yang, X.; Wang, L.; Tu, S.; Xue, Y. Automatic recognition of lactating sow postures from depth images by deep learning detector. Comput. Electron. Agric. 2018, 147, 51–63.
401. Bishop, J.C.; Falzon, G.; Trotter, M.; Kwan, P.; Meek, P.D. Livestock vocalisation classification in farm soundscapes. Comput. Electron. Agric. 2019, 162, 531–542.
402. Hamilton, A.; Davison, C.; Tachtatzis, C.; Andonovic, I.; Michie, C.; Ferguson, H.; Somerville, L.; Jonsson, N. Identification of the Rumination in Cattle Using Support Vector Machines with Motion-Sensitive Bolus Sensors. Sensors 2019, 19, 1165.
403. Fogarty, E.S.; Swain, D.L.; Cronin, G.M.; Moraes, L.E.; Trotter, M. Behaviour classification of extensively grazed sheep using machine learning. Comput. Electron. Agric. 2020, 169, 105175.
404. Rao, Y.; Jiang, M.; Wang, W.; Zhang, W.; Wang, R. On-farm welfare monitoring system for goats based on Internet of Things and machine learning. Int. J. Distrib. Sens. Netw. 2020, 16, 155014772094403.
405. Xu, B.; Wang, W.; Falzon, G.; Kwan, P.; Guo, L.; Sun, Z.; Li, C. Livestock classification and counting in quadcopter aerial images using Mask R-CNN. Int. J. Remote Sens. 2020, 41, 8121–8142.
406. Riaboff, L.; Poggi, S.; Madouasse, A.; Couvreur, S.; Aubin, S.; Bédère, N.; Goumand, E.; Chauvin, A.; Plantier, G. Development of a methodological framework for a robust prediction of the main behaviours of dairy cows using a combination of machine learning algorithms on accelerometer data. Comput. Electron. Agric. 2020, 169, 105179.
407. Taneja, M.; Byabazaire, J.; Jalodia, N.; Davy, A.; Olariu, C.; Malone, P. Machine learning based fog computing assisted data-driven approach for early lameness detection in dairy cattle. Comput. Electron. Agric. 2020, 171, 105286.
408. Gorczyca, M.T.; Gebremedhin, K.G. Ranking of environmental heat stressors for dairy cows using machine learning algorithms. Comput. Electron. Agric. 2020, 168, 105124.
409. Fu, Q.; Shen, W.; Wei, X.; Zhang, Y.; Xin, H.; Su, Z.; Zhao, C. Prediction of the diet energy digestion using kernel extreme learning machine: A case study with Holstein dry cows. Comput. Electron. Agric. 2020, 169, 105231.
410. Warner, D.; Vasseur, E.; Lefebvre, D.M.; Lacroix, R. A machine learning based decision aid for lameness in dairy herds using farm-based records. Comput. Electron. Agric. 2020, 169, 105193.
411. Borgonovo, F.; Ferrante, V.; Grilli, G.; Pascuzzo, R.; Vantini, S.; Guarino, M. A Data-Driven Prediction Method for an Early Warning of Coccidiosis in Intensive Livestock Systems: A Preliminary Study. Animals 2020, 10, 747.
412. Hyde, R.M.; Down, P.M.; Bradley, A.J.; Breen, J.E.; Hudson, C.; Leach, K.A.; Green, M.J. Automated prediction of mastitis infection patterns in dairy herds using machine learning. Sci. Rep. 2020, 10, 4289.
413. Wang, J.; Bell, M.; Liu, X.; Liu, G. Machine-Learning Techniques Can Enhance Dairy Cow Estrus Detection Using Location and Acceleration Data. Animals 2020, 10, 1160.
414. Denholm, S.J.; Brand, W.; Mitchell, A.P.; Wells, A.T.; Krzyzelewski, T.; Smith, S.L.; Wall, E.; Coffey, M.P. Predicting bovine tuberculosis status of dairy cows from mid-infrared spectral data of milk using deep learning. J. Dairy Sci. 2020, 103, 9355–9367.
415. Ghaffari, M.H.; Jahanbekam, A.; Post, C.; Sadri, H.; Schuh, K.; Koch, C.; Sauerwein, H. Discovery of different metabotypes in overconditioned dairy cows by means of machine learning. J. Dairy Sci. 2020, 103, 9604–9619.
416. Khanh, P.C.P.; Tran, D.T.; Duong, V.T.; Thinh, N.H.; Tran, D.N. The new design of cows’ behavior classifier based on acceleration data and proposed feature set. Math. Biosci. Eng. 2020, 17, 2760–2780.
417. Kaler, J.; Mitsch, J.; Vázquez-Diosdado, J.A.; Bollard, N.; Dottorini, T.; Ellis, K.A. Automated detection of lameness in sheep using machine learning approaches: Novel insights into behavioural differences among lame and non-lame sheep. R. Soc. Open Sci. 2020, 7.
418. Keceli, A.S.; Catal, C.; Kaya, A.; Tekinerdogan, B. Development of a recurrent neural networks-based calving prediction model using activity and behavioral data. Comput. Electron. Agric. 2020, 170, 105285.
419. Rodríguez Alvarez, J.; Arroqui, M.; Mangudo, P.; Toloza, J.; Jatip, D.; Rodríguez, J.M.; Teyseyre, A.; Sanz, C.; Zunino, A.; Machado, C.; et al. Body condition estimation on cows from depth images using Convolutional Neural Networks. Comput. Electron. Agric. 2018, 155, 12–22.
420. Gorczyca, M.T.; Milan, H.F.M.; Maia, A.S.C.; Gebremedhin, K.G. Machine learning algorithms to predict core, skin, and hair-coat temperatures of piglets. Comput. Electron. Agric. 2018, 151, 286–294.
421. Pu, H.; Lian, J.; Fan, M. Automatic Recognition of Flock Behavior of Chickens with Convolutional Neural Network and Kinect Sensor. Int. J. Pattern Recognit. Artif. Intell. 2018, 32.
422. Rahman, A.; Smith, D.V.; Little, B.; Ingham, A.B.; Greenwood, P.L.; Bishop-Hurley, G.J. Cattle behaviour classification from collar, halter, and ear tag sensors. Inf. Process. Agric. 2018, 5, 124–133.
423. Shahinfar, S.; Kahn, L. Machine learning approaches for early prediction of adult wool growth and quality in Australian Merino sheep. Comput. Electron. Agric. 2018, 148, 72–81.
424. Shine, P.; Murphy, M.D.; Upton, J.; Scully, T. Machine-learning algorithms for predicting on-farm direct water and electricity consumption on pasture based dairy farms. Comput. Electron. Agric. 2018, 150, 74–87.
425. Wang, D.; Tang, J.L.; Zhu, W.; Li, H.; Xin, J.; He, D. Dairy goat detection based on Faster R-CNN from surveillance video. Comput. Electron. Agric. 2018, 154, 443–449.
426. Elahi, E.; Weijun, C.; Jha, S.K.; Zhang, H. Estimation of realistic renewable and non-renewable energy use targets for livestock production systems utilising an artificial neural network method: A step towards livestock sustainability. Energy 2019, 183, 191–204.
427. Miller, G.A.; Hyslop, J.J.; Barclay, D.; Edwards, A.; Thomson, W.; Duthie, C.-A. Using 3D Imaging and Machine Learning to Predict Liveweight and Carcass Characteristics of Live Finishing Beef Cattle. Front. Sustain. Food Syst. 2019, 3, 30.
428. Tian, M.; Guo, H.; Chen, H.; Wang, Q.; Long, C.; Ma, Y. Automated pig counting using deep learning. Comput. Electron. Agric. 2019, 163, 104840.
429. Alves, A.A.C.; Pinzon, A.C.; da Costa, R.M.; da Silva, M.S.; Vieira, E.H.M.; de Mendonca, I.B.; Lôbo, R.N.B. Multiple regression and machine learning based methods for carcass traits and saleable meat cuts prediction using non-invasive in vivo measurements in commercial lambs. Small Rumin. Res. 2019, 171, 49–56.
430. Gredell, D.A.; Schroeder, A.R.; Belk, K.E.; Broeckling, C.D.; Heuberger, A.L.; Kim, S.Y.; King, D.A.; Shackelford, S.D.; Sharp, J.L.; Wheeler, T.L.; et al. Comparison of Machine Learning Algorithms for Predictive Modeling of Beef Attributes Using Rapid Evaporative Ionization Mass Spectrometry (REIMS) Data. Sci. Rep. 2019, 9, 1–9.
431. Shahinfar, S.; Kelman, K.; Kahn, L. Prediction of sheep carcass traits from early-life records using machine learning. Comput. Electron. Agric. 2019, 156, 159–177.
432. Bakoev, S.; Getmantseva, L.; Kolosova, M.; Kostyunina, O.; Chartier, D.R.; Tatarinova, T.V. PigLeg: Prediction of swine phenotype using machine learning. PeerJ 2020, 8, e8764.
433. Fuentes, S.; Gonzalez Viejo, C.; Cullen, B.; Tongson, E.; Chauhan, S.S.; Dunshea, F.R. Artificial Intelligence Applied to a Robotic Dairy Farm to Model Milk Productivity and Quality based on Cow Data and Daily Environmental Parameters. Sensors 2020, 20, 2975.
434. Cairo, F.C.; Pereira, L.G.R.; Campos, M.M.; Tomich, T.R.; Coelho, S.G.; Lage, C.F.A.; Fonseca, A.P.; Borges, A.M.; Alves, B.R.C.; Dorea, J.R.R. Applying machine learning techniques on feeding behavior data for early estrus detection in dairy heifers. Comput. Electron. Agric. 2020, 179, 105855.
435. Ma, S.; Yao, Q.; Masuda, T.; Higaki, S.; Yoshioka, K.; Arai, S.; Takamatsu, S.; Itoh, T. Development of an Anomaly Detection System for Cattle Using Infrared Image and Machine Learning. Sens. Mater. 2020, 32, 4139–4149.
436. Shahinfar, S.; Al-Mamun, H.A.; Park, B.; Kim, S.; Gondro, C. Prediction of marbling score and carcass traits in Korean Hanwoo beef cattle using machine learning methods and synthetic minority oversampling technique. Meat Sci. 2020, 161, 107997.
Figure 1. The four generic categories in agriculture exploiting machine learning techniques, as presented in [12].
Figure 2. A graphical illustration of a typical machine learning system.
Figure 3. The flowchart of the methodology of the present systematic review, along with the flow of information regarding the exclusion criteria, based on PRISMA guidelines [75].
Figure 4. Representative illustration of a simplified confusion matrix.
Figure 5. The classification of the reviewed studies according to the field of application.
Figure 6. Geographical distribution of the contribution of each country to the research field focusing on machine learning in agriculture.
Figure 7. Distribution of the most contributing international journals (those that published at least four articles) concerning applications of machine learning in agriculture.
Figure 8. Machine learning models reported as giving the best performance in the reviewed studies.
Figure 9. The 10 most investigated crops using machine learning models; the results refer to crop management.
Figure 10. Frequency of animal species in studies concerning livestock management by using machine learning models.
Figure 11. Distribution of the most common features used as input data in the machine learning algorithms for each category/sub-category.
Figure 12. Distribution of the most common output features of the machine learning algorithms regarding: (a) Disease detection and (b) Crop quality.
Figure 13. Temporal distribution of the reviewed studies focusing on machine learning in agriculture, which were published within 2018–2020.
Table 1. Summary of the most commonly used evaluation metrics of the reviewed studies.
Name | Formula
Accuracy | (TP + TN)/(TP + FP + FN + TN)
Recall | TP/(TP + FN)
Precision | TP/(TP + FP)
Specificity | TN/(TN + FP)
F1 score | (2 × Recall × Precision)/(Recall + Precision)
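As a quick, self-contained illustration, the metrics in Table 1 can be computed directly from the four cells of a binary confusion matrix (Figure 4). The counts below are illustrative placeholders, not values taken from any of the reviewed studies.

```python
# Illustrative confusion-matrix cells for a binary classifier:
# TP = true positives, FP = false positives, FN = false negatives, TN = true negatives
TP, FP, FN, TN = 85, 10, 5, 100

accuracy = (TP + TN) / (TP + FP + FN + TN)
recall = TP / (TP + FN)            # also called sensitivity / true positive rate
precision = TP / (TP + FP)         # positive predictive value
specificity = TN / (TN + FP)       # true negative rate
f1_score = (2 * recall * precision) / (recall + precision)

print(f"Accuracy:    {accuracy:.3f}")     # 0.925
print(f"Recall:      {recall:.3f}")       # 0.944
print(f"Precision:   {precision:.3f}")    # 0.895
print(f"Specificity: {specificity:.3f}")  # 0.909
print(f"F1 score:    {f1_score:.3f}")     # 0.919
```

Note that accuracy can be misleading on imbalanced data (e.g. rare disease detection), which is why the reviewed studies often report recall, precision, and F1 score alongside it.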
Table 2. List of the tables appearing in the Appendix A related to: (a) the categories and sub-categories of the machine learning applications in agriculture (Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9) and (b) the abbreviations of machine learning models and algorithms (Table A10 and Table A11, respectively).
Table | Content
A1 | Crop Management: Yield Prediction
A2 | Crop Management: Disease Detection
A3 | Crop Management: Weed Detection
A4 | Crop Management: Crop Recognition
A5 | Crop Management: Crop Quality
A6 | Water Management
A7 | Soil Management
A8 | Livestock Management: Animal Welfare
A9 | Livestock Management: Livestock Production
A10 | Abbreviations of machine learning models
A11 | Abbreviations of machine learning algorithms
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors 2021, 21, 3758. https://doi.org/10.3390/s21113758