Article

Tar Spot Disease Quantification Using Unmanned Aircraft Systems (UAS) Data

1 Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907, USA
2 Department of Botany and Plant Pathology, College of Agriculture, Purdue University, West Lafayette, IN 47907, USA
3 Tecnológico Nacional de México/IT Conkal, Av. Tecnológico s/n, Conkal, Yucatán 97345, Mexico
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(13), 2567; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132567
Submission received: 14 May 2021 / Revised: 22 June 2021 / Accepted: 25 June 2021 / Published: 30 June 2021
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)

Abstract: Tar spot is a foliar disease of corn characterized by fungal fruiting bodies that resemble spots of tar. The disease emerged in the U.S. in 2015, and severe outbreaks in 2018 had an economic impact on corn yields throughout the Midwest. Adequate epidemiological surveillance and disease quantification are necessary to develop immediate and long-term management strategies. This study presents a measurement framework that evaluates tar spot severity using unmanned aircraft systems (UAS)-based plant phenotyping and regression techniques. UAS-based plant phenotypes, such as canopy cover, canopy volume, and vegetation indices, were used as explanatory variables. Visual estimations of disease severity, performed by expert plant pathologists on a per-plot basis, were used as response variables. Three regression methods, namely ordinary least squares (OLS), support vector regression (SVR), and multilayer perceptron (MLP), were compared to determine an optimal method for UAS-based tar spot measurement. Cross-validation showed that the MLP-based regression model provided the most accurate disease measurements. When trained and tested with spatially separated datasets, the proposed regression model achieved a Lin's concordance correlation coefficient (ρc) of 0.82 and a root mean square error (RMSE) of 6.42. This study demonstrates that the proposed UAS-based method can quantify tar spot, a disease that shows a gradual spectral response as it develops.

1. Introduction

Tar spot is a major disease of corn caused by the fungus Phyllachora maydis and is present in 17 countries throughout the Americas; it is an emerging threat to U.S. corn production [1]. Documented yield losses range from 11 to 46% in Latin America and 25 to 30% in the U.S. [2,3,4,5]. First reported in the U.S. in 2015, the disease is characterized by fungal fruiting bodies (stromata) that resemble spots of tar on leaves, stems, and the husks of developing ears [1,6]. Under favorable conditions, the disease develops quickly from the late vegetative stage to the early reproductive stage, eventually reaching an exponential phase. Chemical protection has proven effective for managing the disease [7], although hybrids vary in their susceptibility and reaction to tar spot [1,5]. Nevertheless, reliable epidemiological surveillance and disease quantification will be essential to lay the foundation for developing immediate and long-term management strategies against tar spot.
Conventionally, plant disease surveillance and quantification have heavily relied on human rater vision. The intensity of disease symptoms or signs at a given time is estimated, recorded, and then used to understand a pathosystem of interest [8]. For instance, plant phenotyping of field crops usually involves trained human experts who walk through densely packed rows of plants and record observations at the plant population level. However, the limited spatial coverage of conventional disease phenotyping often impedes the timely identification of hotspots during early epidemic stages. Moreover, significant variability in disease estimates frequently occurs between human disease raters [8]. In addition, on relatively small plots, the repeated crop assessments required for multitemporal analyses eventually disrupt the integrity of plant canopies and alter the soil compactness and other growing conditions due to frequent walkthroughs by human raters [9]. Consequently, these factors can compromise the quality of the data collected and impact the adequacy of disease-management recommendations.
A successful disease-assessment scheme should be describable, efficient, and appropriate for the task [8]. Researchers have explored innovative methods to detect and quantify plant diseases at the plant population level [10,11,12]. Optical remote sensing technologies, including unmanned aircraft systems (UAS) equipped with sensors, have shown potential for disease-intensity assessment [13,14,15,16]. UAS enable rapid data collection throughout the growing season at large population scales [11,12,17,18,19,20,21,22], and the data collected can be used to extract plant physiological attributes [23,24,25,26] that reflect changes in plant physiology and disease intensity [27,28]. Ideally, a disease-assessment scheme should be both precise and accurate. Therefore, before implementing these promising tools for plant disease detection and quantification at the plant population level, we must test the technology for reliability and validate the accuracy of its measurements against ground truth data [10,11,29].
A recent study showed that structural and chlorophyll vegetation indices from remote sensing images are positively related to tar spot disease severity [30]. However, the disease severity used in this previous study was defined by the area under the disease progress curve (AUDPC), which was a summary statistic of the entire cropping season. Therefore, multiple measurements of field-measured tar spot disease severity were required. To provide tar spot disease severity at a specific date, a new disease measurement approach is necessary that does not require information on previous disease progression.
In this regard, we hypothesize that machine learning methods using UAS-based data can reliably and accurately quantify tar spot severity under field conditions. We used visual observations obtained by expert plant pathologists every 7–14 days to estimate tar spot intensity at specific dates, which yielded a sufficient number of disease scores on a continuous scale. Compared with other studies, we take advantage of a relatively large amount of visual and UAS data and a more mechanistic approach to assessing agreement. We used a spectral phenotyping and regression-based approach to study the agreement between visual disease assessment and UAS-based disease measurement. We tested the hypothesis with the following steps: (i) collect tar spot visual estimations and UAS data under field conditions, (ii) develop robust disease quantification procedures for UAS data, and (iii) analyze and assess the agreement between visual disease estimation and UAS-based measurement of tar spot symptoms.

2. Materials and Methods

2.1. Field Experiments

Experiments were carried out in Indiana during the 2020 cropping cycle at the Pinney Purdue Agriculture Center (PPAC) in LaPorte County (Figure 1). All experiments used a randomized complete block design (Table 1), with four replications as blocks and fungicide treatments randomly assigned within blocks. In all experiments, each treatment was applied to four-row plots, of which only the middle two rows were used for visual evaluation and yield estimation. The plant density in all experiments was 84,000 plants ha−1. Each row was 10 m long, with 70 cm between rows. Seeds were planted at a 3 cm depth, with 12 cm between plants. In addition, supplementary irrigation was provided at the PPAC location. Tar spot was the most prevalent disease throughout all experiments.
All of the research plots (Tar1–4) were designed to investigate the effects of different treatments on tar spot disease. In Tar1, the effect of nine fungicides plus a control was investigated; applications were made at the VT/R1 growth stage for all fungicides at the manufacturer-recommended dose. In Tar2, the effects of tillage, three hybrids (two moderately susceptible and one susceptible), and fungicide application (applied and non-treated) were investigated; fungicide was applied at VT/R1 at the manufacturer-recommended dose, giving twelve treatments in total, including the control. In Tar3, the effect of two fungicides applied at different growth stages was investigated; applications started at the first detection of the disease and were made at V8, VT, and R3, plus combinations of multiple growth stages, for a total of 18 treatments, including two controls. In Tar4, the effect of a single fungicide applied at different growth stages was investigated; applications were made at V8, V10, VT, R2, R3, R4, R5, V8/VT, and 14 days after a warning-system alert, plus a non-treated control, all at the manufacturer-recommended dose. The spatial distribution of tar spot treatments in the study area is shown in Figure 2.

2.2. Tar Spot Visual Rating

In practice, multiple types of scales and methods can be found throughout the literature. Our selection was based on published work conducted over the last decade [32,33,34,35,36]. In this study, disease severity was defined as the proportion of diseased leaf area in total leaf area multiplied by 100 to obtain the percentage of disease severity [37].
Visual estimation of tar spot severity was performed weekly at the sub-subplot or plot level on the two middle rows. The diseased area included both black stromata in the early disease stages and the additional chlorotic or necrotic symptoms on the leaf or canopy that developed afterward. A single disease severity estimate was recorded per experimental unit for each of the lower, middle, and upper canopy layers. Considering the ear leaf as leaf 0 (L0), leaves below or above L0 were identified with a minus (−) or plus (+) sign, respectively. The lower canopy corresponded to L − 3 down to the lowest leaf (L − n), the mid-canopy from L − 2 to L + 1, and the upper canopy from L + 2 to the flag leaf (L + n). Visual severity evaluations were performed on 13–14 dates (Table 2), starting at VT (tassel) and continuing to the R6 (physiological maturity) growth stage in all experiments. Instead of using the entire planted plot area, the visual assessment was conducted within the two middle rows to avoid potential treatment overlaps (Figure 3). Visual ratings of the lower canopy were not used, however, since the optical sensors could not observe vegetation in the lower canopy (Figure A1). The tar spot visual ratings showed an increasing trend over time (Figure 4), with the rate and amount of disease progress differing among research plots.

2.3. Unmanned Aircraft Systems (UAS) Data Collection and UAS Data Preprocessing

Unmanned aircraft systems (UAS) data were acquired by a Phantom 4 Multispectral (DJI, Shenzhen, China) equipped with six 1/2.9″ CMOS (complementary metal-oxide-semiconductor) image sensors: one RGB sensor and five monochrome sensors. The spectral bands of the five monochrome sensors are blue (450 ± 16 nm), green (560 ± 16 nm), red (650 ± 16 nm), red edge (730 ± 16 nm), and near-infrared (840 ± 26 nm). Flight altitude and image overlap were generally set to 30 m and 75% to obtain fine-resolution orthomosaic and digital surface model (DSM) data with ground sampling distances (GSD) of approximately 1.5 cm and 3.0 cm, respectively. All UAS flights were conducted within a day of the dates on which the visual ratings were performed.
Radiometric calibration was performed on the multispectral UAS images. First, raw at-sensor irradiance was corrected using the downwelling light sensor (DLS) orientation. Second, irradiance on the ground was computed as the sum of the diffuse and direct sunlight components. Third, per-pixel radiance was calculated accounting for dark current, vignetting, and exposure time [38,39]. Finally, reflectance was computed from the per-pixel radiance and the on-ground irradiance. Atmospheric correction was not performed on the UAS images, since atmospheric attenuation over a 0–30 m altitude range can generally be neglected [40].
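The per-pixel radiance-to-reflectance chain described above can be summarized in a short sketch. The function and variable names below are hypothetical, and the calibration constants would in practice come from the image metadata; this illustrates the computation rather than reproducing the exact vendor pipeline.

```python
import numpy as np

def radiance_from_raw(raw_dn, dark_level, vignette_gain, exposure_s, sensor_gain):
    """Raw digital numbers -> at-sensor radiance: subtract the dark-current
    offset, undo per-pixel vignetting, and normalize by exposure and gain."""
    corrected = (raw_dn.astype(np.float64) - dark_level) / vignette_gain
    return corrected / (exposure_s * sensor_gain)

def surface_reflectance(radiance, ground_irradiance):
    """Reflectance from per-pixel radiance and on-ground irradiance (the sum
    of direct and diffuse components derived from the DLS measurement)."""
    return np.clip(np.pi * radiance / ground_irradiance, 0.0, 1.0)
```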
We used the multi-temporal UAS data to generate orthomosaic images and DSMs using the structure from motion (SfM) algorithm. SfM is a 3D reconstruction method widely used for large-scale UAS data collected by consumer-grade or survey-grade cameras. A conventional SfM workflow for UAS data comprises four major steps: finding common feature points in an image dataset, matching feature points across multiple image pairs, ground control point (GCP)-based orientation to georeference the 3D model, and iterative bundle adjustment (BA) to recover the camera exterior orientation parameters (EOP) and scene geometry [41,42]. This study used the SfM processing pipeline provided by Metashape (AgiSoft LLC, St. Petersburg, Russia) to generate the orthomosaic images and DSMs.
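For readers reproducing the photogrammetric step, a minimal sketch of this workflow using the Metashape Python API is shown below. Method names and arguments vary between Metashape versions, so treat this as an outline rather than a drop-in script; `photo_paths` is a placeholder list of image files.

```python
import Metashape  # Agisoft Metashape Professional Python API (version-dependent)

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photo_paths)    # multispectral UAS images

chunk.matchPhotos(downscale=1)  # feature detection and matching
chunk.alignCameras()            # bundle adjustment: EOPs and sparse geometry

# GCPs would be imported as markers here to georeference the model
# before the bundle adjustment is re-optimized.

chunk.buildDepthMaps(downscale=2)
chunk.buildDem(source_data=Metashape.DepthMapsData)           # DSM
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)  # orthomosaic

chunk.exportRaster("dsm.tif", source_data=Metashape.ElevationData)
chunk.exportRaster("ortho.tif", source_data=Metashape.OrthomosaicData)
```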

2.4. Unmanned Aircraft Systems (UAS)-Based Plant Phenotyping

We used the orthomosaic images and DSMs to obtain plant phenotypes for each experimental unit (Figure 5). First, we generated raster images including (a) canopy and non-canopy classification by the Canopeo algorithm [43], (b) canopy height measured from the ground surface to the uppermost canopy, (c) excessive greenness (ExG), (d) the normalized difference vegetation index (NDVI), (e) the soil-adjusted vegetation index (SAVI), and (f) the modified soil-adjusted vegetation index (MSAVI). The definitions and implications of the vegetation indices (c–f) can be found in previous publications [44,45,46]. Second, we created a rectangular grid of 9 m by 1.5 m for each experimental unit, named the level 1 grid (L1G, Figure 3). A total of 24 square grids of 0.75 m by 0.75 m (level 2 grid, L2G) were also created within each L1G area. As in the visual assessment of tar spot severity, we calculated UAS-derived plant phenotypes in the middle two rows. We designed the individual L2Gs to fit tightly between adjacent planting rows, aligning the vertical centerlines of the grids with the planting rows. Third, we calculated zonal statistics, including the sum, average, maximum, and standard deviation of the raster data from (a–f), using the L1G and L2G polygons, as sketched below. The numbers of L1G and L2G phenotypes were 14 and 336 (14 × 24), respectively.
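As an illustration of this phenotyping step, the sketch below computes the vegetation indices named above from reflectance bands and extracts the per-grid zonal statistics. It assumes each grid cell has already been rasterized into a boolean mask, and all function names are ours, not from a specific library.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def msavi(nir, red):
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def exg(red, green, blue):
    """Excess greenness from chromatic coordinates: 2g - r - b."""
    total = red + green + blue
    return (2 * green - red - blue) / total

def zonal_stats(raster, zone_mask):
    """Sum, average, maximum, and standard deviation of the raster cells
    inside one L1G or L2G polygon (given as a boolean mask)."""
    vals = raster[zone_mask]
    return {"sum": float(vals.sum()), "avg": float(vals.mean()),
            "max": float(vals.max()), "stdev": float(vals.std())}
```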

2.5. Variable Selection and Data Standardization

Variable selection was conducted to choose relevant input variables for regression analysis from the L1G UAS phenotypes. For regression with a single input variable, the L1G phenotype with the strongest positive or negative Pearson's correlation with the visual ratings was selected. For regression models with multiple input variables, the best subset of input L1G phenotypes was chosen by the Bayesian information criterion (BIC) [47]: we generated ordinary least squares (OLS) models for every possible combination of L1G phenotypes using all observed data and chose the subset of input variables with the lowest BIC. The individual input variables were standardized before entering the regression models by subtracting the mean and dividing by the standard deviation; this standardization was applied separately to the training and test data sets.
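A minimal sketch of the exhaustive best-subset search and the standardization step, using statsmodels for the OLS fits; `X`, `y`, and `names` are hypothetical placeholders for the L1G phenotype matrix, the visual ratings, and the variable names.

```python
import itertools
import numpy as np
import statsmodels.api as sm

def best_subset_by_bic(X, y, names):
    """Fit an OLS model for every combination of candidate phenotypes
    and return the subset with the lowest BIC."""
    best_bic, best_subset = np.inf, ()
    for k in range(1, len(names) + 1):
        for idx in itertools.combinations(range(len(names)), k):
            fit = sm.OLS(y, sm.add_constant(X[:, idx])).fit()
            if fit.bic < best_bic:
                best_bic, best_subset = fit.bic, idx
    return [names[i] for i in best_subset]

def standardize(a):
    """Z-score standardization; per the text, applied separately
    to the training and test sets."""
    return (a - a.mean(axis=0)) / a.std(axis=0)
```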

2.6. Regression Methods

Three regression techniques were used to convert UAS phenotypes to tar spot severity. Ordinary least squares was chosen for the simplicity of its model interpretation; support vector regression was selected for its accuracy and generalization capability; and the multilayer perceptron was selected for its ability to model nonlinear processes.

2.6.1. Ordinary Least Squares (OLS)

Ordinary least squares (OLS) is a least squares technique that finds a regression model by minimizing the sum of squared errors between observed and fitted values [48]. We used a linear regression model with a constant term:

$$y_i = \mathbf{x}_i^{T}\boldsymbol{\beta} + C + \varepsilon_i \quad (1)$$

where $y_i$ is the response of the i-th observation (visual rating); $\mathbf{x}_i$ is the i-th observation of the explanatory variables; $\boldsymbol{\beta}$ is a vector of regression coefficients; $C$ is a constant (intercept); and $\varepsilon_i$ is an error term. OLS was used to estimate visual ratings of the middle and upper canopy layers from single and multiple L1G phenotypes.

2.6.2. Support Vector Regression (SVR)

Support vector regression (SVR) is a regression method that finds an optimal hyperplane using the same principles as the support vector machine (SVM). SVR seeks a hyperplane that minimizes both the magnitude of the normal vector and the prediction error. The generalization capability of SVR is achieved by penalizing data points outside the ε-tube around the estimated function. The objective function of SVR can be written as Equation (2):

$$\frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C\sum_{i=1}^{l}\left(\xi_i + \xi_i^{*}\right) \quad (2)$$

where $\mathbf{w}$ is the normal vector; $\xi_i$ and $\xi_i^{*}$ are the prediction errors above and below the ε-tube around the estimated function; and $C$ is a regularization parameter that trades off the flatness of the hyperplane against the sum of the prediction errors. SVR can also solve nonlinear regression problems by mapping data points into a higher-dimensional space [49].
This study used a grid search to find the best values of ε, C, and the kernel parameters for three kernel types: linear, polynomial, and radial basis function (RBF) (Table 3) [50,51]. The RBF kernel is defined as

$$\kappa(\mathbf{u}, \mathbf{v}) = \exp\left(-\gamma \lVert \mathbf{u} - \mathbf{v} \rVert^{2}\right) \quad (3)$$

where $\mathbf{u}$ and $\mathbf{v}$ are n-dimensional vectors, and $\gamma$ corresponds to $1/(2\sigma^{2})$ in the Gaussian function. The SVR models were used to estimate visual ratings of the middle and upper canopy layers using multiple L1G phenotypes.
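The grid search over Table 3 can be reproduced with scikit-learn, whose SVR wraps LIBSVM [49]. A sketch, with `X_train` and `y_train` standing in for the standardized L1G phenotypes and visual ratings:

```python
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Hyperparameter grid mirroring Table 3
param_grid = [
    {"kernel": ["linear"], "epsilon": [0.05, 0.10, 0.15], "C": [0.1, 1, 10, 100]},
    {"kernel": ["poly"], "degree": [2, 3],
     "epsilon": [0.05, 0.10, 0.15], "C": [0.1, 1, 10, 100]},
    {"kernel": ["rbf"], "gamma": [1e-5, 1e-3, 1e-1, 10],
     "epsilon": [0.05, 0.10, 0.15], "C": [0.1, 1, 10, 100]},
]

search = GridSearchCV(SVR(), param_grid, cv=3,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)
print(search.best_params_)  # e.g., kernel, epsilon, C, gamma (cf. Table 7)
```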

2.6.3. Multilayer Perceptron (MLP)

Multilayer perceptron (MLP) is a feedforward artificial neural network (ANN) consisting of an input layer, hidden layers, and an output layer. Due to its structural simplicity and nonlinear modeling capability, the MLP has been widely used in regression problems in the plant sciences [52,53].
The MLP was used to model the relationship between the tar spot visual ratings of the three canopy layers and the L1G or L2G UAS phenotypes. Preliminary results showed that an MLP with a single hidden layer performed better than MLPs with 2–4 hidden layers (Figure 6). Therefore, a grid search was conducted to determine the number of nodes in the single hidden layer: we tested MLPs with 5, 10, 20, 40, 80, and 160 hidden nodes for L1G phenotypes and with 3, 5, 10, 20, 40, and 80 hidden nodes for L2G phenotypes. In the training process, the mean square error (MSE) was minimized by the Adam optimizer (adaptive moment estimation) with an early-stopping patience of 5. All processing nodes in the MLP model used the rectified linear unit (ReLU) as the activation function [54,55].
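A sketch of this architecture in Keras under the stated settings (single ReLU hidden layer, Adam on MSE, early stopping with patience 5). The hidden-layer width of 80 is the L1G middle-canopy value reported in Section 3.3; the training details not given in the text (epoch budget, validation split) are our assumptions.

```python
import tensorflow as tf

def build_mlp(n_inputs, n_hidden):
    """Single-hidden-layer MLP; all processing nodes use ReLU, as in the text."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(n_hidden, activation="relu"),
        tf.keras.layers.Dense(1, activation="relu"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
model = build_mlp(n_inputs=X_train.shape[1], n_hidden=80)
model.fit(X_train, y_train, validation_split=0.2, epochs=500,
          callbacks=[early_stop], verbose=0)
```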

2.6.4. Evaluating the Performance of Regression Models

The performance of the regression models was assessed by cross-validation and a transferability test. First, 3-fold cross-validation was repeated 30 times using data from all study plots. The coefficient of determination (R2), root mean square error (RMSE), and Lin's concordance correlation coefficient (ρc) were averaged within each 3-fold cross-validation, and the mean and standard deviation over the 30 repetitions were then computed. An optimal regression model was chosen based on these accuracy metrics. Second, a transferability test was conducted to obtain accuracy metrics with spatially separated training and test data. For example, the performance of the MLP model on Tar4 data was assessed by training the regression model with the Tar1, Tar2, and Tar3 data and then testing it on the Tar4 data.
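The two evaluation metrics and the repeated cross-validation loop are straightforward to implement. The sketch below defines RMSE and Lin's ρc and runs the repeated 3-fold protocol for any model wrapped as a `fit_predict(X_train, y_train, X_test)` callable (our interface, for illustration).

```python
import numpy as np
from sklearn.model_selection import KFold

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def lins_ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient (rho_c)."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

def repeated_cv(fit_predict, X, y, n_repeats=30, n_folds=3, seed=0):
    """Repeat k-fold CV and return the mean and std of (RMSE, rho_c)."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        folds = KFold(n_folds, shuffle=True, random_state=rng.randint(1 << 30))
        per_fold = []
        for tr, te in folds.split(X):
            yhat = fit_predict(X[tr], y[tr], X[te])
            per_fold.append((rmse(y[te], yhat), lins_ccc(y[te], yhat)))
        scores.append(np.mean(per_fold, axis=0))
    return np.mean(scores, axis=0), np.std(scores, axis=0)
```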

3. Results

3.1. Correlation Analysis with UAS-Derived Plant Phenotypes

The correlation coefficients between the L1G phenotypes and the visual ratings of the middle and upper canopy layers revealed that the averages and maxima of MSAVI, NDVI, and SAVI had negative correlations below −0.8 (Table 4), indicating that these vegetation indices are inversely related to tar spot severity. The L1G averages showed correlations of higher magnitude than the maxima, while the standard deviation statistics showed weaker correlations than the other statistics. Canopy cover, canopy volume, and the ExG-based statistics showed weaker relationships with the visual ratings than MSAVI, NDVI, and SAVI. The strongest correlations, between the MSAVI average and the visual ratings of the middle and upper canopy, were −0.87 and −0.83, respectively.
Multicollinearity among the explanatory variables (L1G phenotypes) was investigated using a correlation matrix, which revealed statistically significant correlations among the L1G phenotypes (Figure 7). For example, the correlation between the L1G MSAVI average and the L1G NDVI average was 0.99, indicating a very strong positive relationship. Similar results were observed among the corresponding statistics (average, maximum, and standard deviation) of MSAVI, NDVI, and SAVI. Since the L1G phenotypes of the multispectral vegetation indices contained redundant information, we used only the L1G canopy cover, canopy volume, and the statistics of ExG and MSAVI in the variable selection process.
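This redundancy screen can be done in a couple of lines; a sketch, assuming the L1G phenotypes are held in a pandas DataFrame (`l1g`, our placeholder name):

```python
import pandas as pd

corr = l1g.corr()  # Pearson correlation matrix, as visualized in Figure 7
redundant = [(a, b, round(corr.loc[a, b], 2))
             for i, a in enumerate(corr.columns)
             for b in corr.columns[i + 1:]
             if abs(corr.loc[a, b]) > 0.95]
# e.g., ('MSAVI_avg', 'NDVI_avg', 0.99) -> keep one variable per redundant pair
```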
High correlations, with Pearson's correlation coefficients over 0.90, were also observed among the L2G phenotypes. For example, the L2G MSAVI averages from grid locations 1–24 showed correlations over 0.95. Slightly lower correlation coefficients, around 0.95, were observed between phenotypes at the northern (grids 1 and 13) and southern (grids 12 and 24) edges, while correlations among the L2G MSAVI averages at locations 2–11 and 14–23 were above 0.96. Such multicollinearity was also observable in the other L2G UAS phenotypes.

3.2. Variable Selection

Variable selection was performed to choose the input L1G phenotypes for regression analysis. For the regression model with a single input variable, the L1G average of MSAVI was chosen because it had the highest correlation with the visual ratings in the middle and upper canopy (Table 4).
Multiple L1G variables were selected based on BIC, as shown in Table 5 and Table 6. For the middle canopy, the canopy cover, maximum of ExG, average of MSAVI, and standard deviation of MSAVI were chosen. The selected variables for the upper canopy included the canopy cover, maximum of ExG, average of MSAVI, maximum of MSAVI, and standard deviation of MSAVI. The average of MSAVI was included in the best subsets for both the middle and upper canopy layers, and its coefficient had the highest magnitude among all variables, indicating that the average MSAVI is the predominant input variable explaining most of the variance in tar spot severity.

3.3. Hyperparameter Tuning of SVR and MLP Models by Grid Search

A set of optimal hyperparameters for each regression model was fine-tuned by grid search. The optimal parameters for the SVR models are shown in Table 7. The optimal numbers of hidden-layer nodes for the MLP with L1G variables were 80 and 40 for the middle and upper canopy layers, respectively. For the MLP models using L2G variables, the optimal numbers of nodes were 3 and 5 for the middle and upper canopy, respectively.

3.4. Accuracy of Tar Spot Severity Measurement by Cross-Validation

The average RMSE of repeated cross-validations showed that the MLP model obtained the most accurate results with multiple L1G phenotypes (Table 8). The average RMSE of the MLP-L1G model for the middle and upper canopy was 10.4 and 7.9, respectively. Similar accuracy was achieved by the MLP model of L2G phenotypes with an average RMSE of 10.4 and 8.2 for the middle and upper canopies, respectively. Average RMSE values from OLS or SVR were substantially higher than those obtained by the MLP models. It should be noted that the standard deviation of cross-validated RMSE of the MLP models was generally lower than those of the OLS and SVR, indicating a higher consistency of model performance.
The tar spot severity estimates produced by the optimal OLS, SVR, and MLP models all showed an increasing trend as the ground reference values increased. Figure 8 displays the relationship between the tar spot visual ratings and the UAS-based measurements acquired by 3-fold cross-validation, where hollow red, orange, and blue circles represent the three test sets of the 3-fold cross-validation. Compared with the MLP models, the OLS and SVR models tended to produce higher variance when the visual rating was in the 30–70 range, and they underestimated the disease severity when the visual rating was in the 60–100 range. Based on these results, we conducted the transferability test with the MLP model using L1G phenotypes.

3.5. Accuracy of Tar Spot Severity Measurement by Transferability Test

The transferability test demonstrated the applicability of UAS-based tar spot measurement under different field locations and management conditions (Figure 9, Figure 10, Figure 11 and Figure 12). A linear trend between the visual ratings and the UAS measurements was observed in Figure 9a, Figure 10b and Figure 12b, with either an overestimating or an underestimating bias. A nonlinear trend was observed in Figure 9b and Figure 11a,b, which show an exponentially increasing pattern as the visual rating increases. Nevertheless, the concordance between the visual ratings and the UAS measurements indicated that the MLP model captured a statistical relationship between tar spot severity and the spectral information.
To investigate the cause of the lower accuracy in the transferability test, the relationship between the visual ratings and the L1G average of MSAVI was examined; recall that the MSAVI average had the strongest negative correlation with the visual ratings. The scatter plots in Figure 13 revealed that the data distributions of the four study plots were comparable when the visual rating was in the 0–20 range, but a positive offset of MSAVI was observed in Tar2 and Tar3 when the visual rating was in the 20–100 range. For the most part, Tar1 and Tar4 showed similar data distributions over the entire range of visual ratings. We therefore selectively used the Tar1 and Tar4 data to test the transferability of the proposed approach, training the MLP model with L1G phenotypes on the Tar1 data and testing it on the Tar4 data. The resulting RMSEs of the UAS-based measurements in the middle and upper canopies were 7.61 and 6.42, respectively (Figure 14), bringing the trend line between the two measures closer to a 1-to-1 line than in the previous results (Figure 9, Figure 10, Figure 11 and Figure 12).

4. Discussion

Disease measurement based on a regression approach performs well when the training and test data share the same statistical distribution. We found that differences in plot location and management practices can change the relationship between visual ratings and UAS-derived plant phenotypes; the relationship can also change with climatic conditions and yearly fluctuations in epidemiological factors. As shown in the transferability test, a selective approach that uses only the most relevant data as a training set was adequate when enough data were available. Future research is needed to define the environmental parameters that govern the relationship between plant phenotype and disease severity, so that a training dataset can be confined effectively.
There are several drawbacks to using UAS-based plant phenotypes for disease measurement. First, a plant phenotype is not a direct measurement of tar spot intensity; rather, phenotypes are determined by a combination of factors, including plant vigor, water stress, and disease stress. Second, the quality of phenotypic data acquired from an image-based approach can be degraded by strong winds, uneven illumination, and poor image alignment. Third, the size and number of spatial grids used in plant phenotyping remain ambiguous; although this study tested two gridding schemes (L1G and L2G) and found that L1G produced more accurate results, further research is required to determine the optimal geometry and size of the grids. Despite these drawbacks, disease measurement with image-based plant phenotypes remains one of the most frequently used methods in UAS research when the spatial resolution of the imagery is insufficient to capture individual disease lesions.
As a starting point for tar spot disease quantification using remotely sensed data, we proposed the hypothesis that UAS data can be used effectively to measure disease severity. Our data-driven approach provided a way to quantify tar spot severity effectively with regression techniques. However, the data were collected at a single location in a single year; future studies are therefore required to investigate the reproducibility of this method in space and time. Adding datasets from different years and locations would significantly help our approach model the complex relationships between UAS phenotypes and tar spot severity.
As an alternative method of UAS-based disease measurement, a regression approach based on deep learning could be used. There are three significant advantages to deep learning methods: (a) information loss during the phenotyping process can be minimized because deep learning uses the original pixel values; (b) a complex relationship between pixel values and visual ratings can be established; and (c) the phenotyping procedure can be omitted, since deep learning models can be trained using only the orthomosaic and DSM as input.
In addition, we recommend developing a spectral disease index (SDI) for tar spot, which correlates a significant wavelength to tar spot-infected plants’ biochemical or biophysical characteristics [24,56,57]. The generation of SDI using hyperspectral sensors will determine a functional spectral range throughout the different stages of the disease epidemic [24,58], which may improve our accuracy in tar spot disease detection and severity predictions.
This study does not cover operational applications of the UAS-based tar spot measurement technology. In the future, we will explore the possibility of using cross-analysis and heterogeneous data to predict and manage plant diseases. The results and models reported in this study may be implemented and tested in next-generation decision support systems for mitigating tar spot disease.

5. Conclusions

This study presented UAS-based disease quantification of tar spot of corn based on spectral phenotyping and regression techniques. The highest accuracy of the proposed method was obtained by MLP models with a reduced number of lower-resolution (L1G) phenotypes: the RMSE and ρc were 10.4 and 0.91 in the middle canopy layer and 7.9 and 0.90 in the upper canopy layer, respectively. In addition, the performance of MLP models that use the 336 higher-spatial-resolution (L2G) phenotypes was comparable to the best results. Another important finding was that UAS-based disease measurement was possible in the upper (L + 2 to flag leaf) and middle (L − 2 to L + 1) canopy layers. The cross-validation and transferability test results revealed that the accuracy of UAS-based tar spot measurement can be improved by training the model with a dataset containing sufficient statistical information relating the spectral phenotypes to the disease symptoms. We expect the demonstrated approach to provide opportunities to detect and monitor plant diseases that show a gradual spectral response in the external plant structures as the disease develops.

Author Contributions

Conceptualization, J.J., C.D.C., D.E.P.T. and S.O.; methodology, S.O., C.G.-C., D.-Y.L., J.J. and A.A.; software, S.O., A.A. and J.J.; validation, S.O., J.J., D.-Y.L., C.G.-C. and J.J.; formal analysis, S.O. and J.J.; investigation, S.O., J.J., D.-Y.L., C.G.-C. and C.D.C.; resources, C.D.C., D.E.P.T. and J.J.; data curation, A.A., C.G.-C., J.C., M.F.-C., B.Z.L., D.-Y.L., A.P.C. and S.O.; writing—original draft preparation, S.O., J.J., D.-Y.L., C.G.-C., C.D.C.; writing—review and editing, S.O., J.J., D.-Y.L., C.G.-C., C.D.C., A.A., M.F.-C. and D.E.P.T.; visualization, S.O., J.J. and C.D.C.; supervision, J.J. and C.D.C.; project administration, D.E.P.T.; funding acquisition, D.E.P.T., C.D.C. and J.J.; field trial conceptualization, establishment, implementation, D.E.P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Indiana Corn Marketing Council, grant number 40001376. The establishment of the field trials was supported by the Indiana Corn Marketing Council, FFAR-ROAR, and the USDA National Institute of Food and Agriculture, Hatch Project #IND00162952.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We thank Jeffrey Ravellette for technical field assistance with this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The accuracy of UAS-based tar spot measurement in the lower canopy layer was evaluated using the MLP model with L1G UAS phenotypes. The average RMSE and ρc of 3-fold cross-validation were 8.48 and 0.71, respectively.
Figure A1. Tar spot disease rating measured by UAS-based MLP-L1G models at the lower canopy level. The MLP-L1G model was trained with a random split of 3-fold cross-validation of all PPAC data.

References

1. Valle-Torres, J.; Ross, T.J.; Plewa, D.; Avellaneda, M.C.; Check, J.; Chilvers, M.I.; Cruz, A.P.; Lana, F.D.; Groves, C.; Gongora-Canul, C.; et al. Tar spot: An understudied disease threatening corn production in the Americas. Plant Dis. 2020, 104, 2541–2550.
2. Pereyda-Hernández, J.; Hernández-Morales, J.; Sandoval-Islas, J.S.; Aranda-Ocampo, S.; de León, C.; Gómez-Montiel, N. Etiology and management of tar spot (Phyllachora maydis Maubl.) of maize in Guerrero state, México. Agrociencia 2009, 43, 511–519.
3. Hock, J.; Kranz, J.; Renfro, B. El complejo 'mancha de asfalto' de maíz: Su distribución geográfica, requisitos ambientales e importancia económica en México. Rev. Mex. Fitopatol. 1989, 7, 129–135.
4. Mueller, D.; Wise, K.; Sisson, A. Corn disease management: Corn disease loss estimates from the United States and Ontario, Canada—2017. CPN 2007-17-W. Crop Prot. Netw. 2018.
5. Telenko, D.E.P.; Chilvers, M.I.; Kleczewski, N.; Smith, D.L.; Byrne, A.M.; Devillez, P.; Diallo, T.; Higgins, R.; Joos, D.; Kohn, K.; et al. How tar spot of corn impacted hybrid yields during the 2018 Midwest epidemic. Crop Prot. Netw. 2019.
6. Ruhl, G.; Romberg, M.K.; Bissonnette, S.; Plewa, D.; Creswell, T.; Wise, K.A. First report of tar spot on corn caused by Phyllachora maydis in the United States. Plant Dis. 2016, 100, 1496.
7. Bajet, N.B.; Renfro, B.L.; Carrasco, J.M.V. Control of tar spot of maize and its effect on yield. Int. J. Pest Manag. 1994, 40, 121–125.
8. Madden, L.V.; Hughes, G.; van den Bosch, F. The Study of Plant Disease Epidemics; American Phytopathological Society: St. Paul, MN, USA, 2017.
9. Anthony, D.; Detweiler, C. UAV localization in row crops. J. Field Robot. 2017, 34, 1275–1296.
10. Bock, C.H.; Poole, G.H.; Parker, P.E.; Gottwald, T.R. Plant disease severity estimated visually, by digital photography and image analysis, and by hyperspectral imaging. Crit. Rev. Plant Sci. 2010, 29, 59–107.
11. Mahlein, A.K. Plant disease detection by imaging sensors—Parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 2016, 100, 241–251.
12. Dash, J.P.; Watt, M.S.; Pearse, G.D.; Heaphy, M.; Dungey, H.S. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak. ISPRS J. Photogramm. Remote Sens. 2017, 131, 1–14.
13. Bai, G.; Ge, Y.; Hussain, W.; Baenziger, P.S.; Graef, G. A multi-sensor system for high throughput field phenotyping in soybean and wheat breeding. Comput. Electron. Agric. 2016, 128, 181–192.
14. Ge, Y.; Bai, G.; Stoerger, V.; Schnable, J.C. Temporal dynamics of maize plant growth, water use, and leaf water content using automated high throughput RGB and hyperspectral imaging. Comput. Electron. Agric. 2016, 127, 625–632.
15. Fahlgren, N.; Feldman, M.; Gehan, M.A.; Wilson, M.S.; Shyu, C.; Bryant, D.W.; Hill, S.T.; McEntee, C.J.; Warnasooriya, S.N.; Kumar, I.; et al. A versatile phenotyping system and analytics platform reveals diverse temporal responses to water availability in Setaria. Mol. Plant 2015, 8, 1520–1535.
16. Bock, C.H.; Barbedo, J.G.A.; Del Ponte, E.M.; Bohnenkamp, D.; Mahlein, A.-K. From visual estimates to fully automated sensor-based measurements of plant disease severity: Status and challenges for improving accuracy. Phytopathol. Res. 2020, 2, 1–30.
17. Ashapure, A.; Jung, J.; Yeom, J.; Chang, A.; Maeda, M.; Maeda, A.; Landivar, J. A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2019, 152, 49–64.
18. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A comparative study of RGB and multispectral sensor-based cotton canopy cover modelling using multi-temporal UAS data. Remote Sens. 2019, 11, 2757.
19. Enciso, J.; Avila, C.A.; Jung, J.; Elsayed-Farag, S.; Chang, A.; Yeom, J.; Landivar, J.; Maeda, M.; Chavez, J.C. Validation of agronomic UAV and field measurements for tomato varieties. Comput. Electron. Agric. 2019, 158, 278–283.
20. Yeom, J.; Jung, J.; Chang, A.; Maeda, M.; Landivar, J. Automated open cotton boll detection for yield estimation using unmanned aircraft vehicle (UAV) data. Remote Sens. 2018, 10, 1895.
21. Jung, J.; Maeda, M.; Chang, A.; Landivar, J.; Yeom, J.; McGinty, J. Unmanned aerial system assisted framework for the selection of high yielding cotton genotypes. Comput. Electron. Agric. 2018, 152, 74–81.
22. Chang, A.; Jung, J.; Maeda, M.M.; Landivar, J. Crop height monitoring with digital imagery from Unmanned Aerial System (UAS). Comput. Electron. Agric. 2017, 141, 232–237.
23. Hwang, S.F.; Wang, H.; Gossen, B.D.; Chang, K.F.; Turnbull, G.D.; Howard, R.J. Impact of foliar diseases on photosynthesis, protein content and seed yield of alfalfa and efficacy of fungicide application. Eur. J. Plant Pathol. 2006, 115, 389–399.
24. Mahlein, A.K.; Rumpf, T.; Welke, P.; Dehne, H.W.; Plümer, L.; Steiner, U.; Oerke, E.C. Development of spectral indices for detecting and identifying plant diseases. Remote Sens. Environ. 2013, 128, 21–30.
25. Zaman-Allah, M.; Vergara, O.; Araus, J.L.; Tarekegne, A.; Magorokosho, C.; Zarco-Tejada, P.J.; Hornero, A.; Albà, A.H.; Das, B.; Craufurd, P.; et al. Unmanned aerial platform-based multi-spectral imaging for field phenotyping of maize. Plant Methods 2015, 11, 1–10.
26. Couture, J.J.; Singh, A.; Charkowski, A.O.; Groves, R.L.; Gray, S.M.; Bethke, P.C.; Townsend, P.A. Integrating spectroscopy with potato disease management. Plant Dis. 2018, 102, 2233–2240.
27. Liebisch, F.; Kirchgessner, N.; Schneider, D.; Walter, A.; Hund, A. Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach. Plant Methods 2015, 11, 9–20.
28. Chivasa, W.; Mutanga, O.; Biradar, C. UAV-based multispectral phenotyping for disease resistance to accelerate crop improvement under changing climate conditions. Remote Sens. 2020, 12, 2445.
29. Bock, C.H.; Nutter, F.W. Detection and measurement of plant disease symptoms using visible-wavelength photography and image analysis. CAB Rev. 2011, 6, 1–15.
30. Loladze, A.; Rodrigues, F.A.; Toledo, F.; Vicente, F.S.; Gérard, B.; Boddupalli, M.P. Application of remote sensing for phenotyping tar spot complex resistance in maize. Front. Plant Sci. 2019, 10, 552.
31. Google Maps. Available online: https://www.google.com/maps/place/41%C2%B027'20.0%22N+86%C2%B056'29.7%22W/@41.455894,-86.9377691,961m/data=!3m1!1e3!4m6!3m5!1s0x88119db35908d59d:0xe2e10c4ade176d89!7e2!8m2!3d41.4555489!4d-86.941579 (accessed on 28 June 2021).
32. Acquaah, G. Principles of Plant Genetics and Breeding, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012; ISBN 9780470664766.
33. Cruppe, G.; Cruz, C.D.; Peterson, G.; Pedley, K.; Asif, M.; Fritz, A.; Calderon, L.; da Silva, C.L.; Todd, T.; Kuhnem, P.; et al. Novel sources of wheat head blast resistance in modern breeding lines and wheat wild relatives. Plant Dis. 2020, 104, 35–43.
34. Cruz, C.D.; Magarey, R.D.; Christie, D.N.; Fowler, G.A.; Fernandes, J.M.; Bockus, W.W.; Valent, B.; Stack, J.P. Climate suitability for Magnaporthe oryzae Triticum pathotype in the United States. Plant Dis. 2016, 100, 1979–1987.
35. Fernandez-Campos, M.; Gongora-Canul, C.; Das, S.; Kabir, M.; Valent, B.; Cruz, C. Epidemiological criteria to support breeding tactics against the emerging, high-consequence wheat blast disease. Plant Dis. 2020, 104, 2252–2261.
36. Vales, M.; Anzoátegui, T.; Huallpa, B.; Cazon, M.I. Review on resistance to wheat blast disease (Magnaporthe oryzae Triticum) from the breeder point-of-view: Use of the experience on resistance to rice blast disease. Euphytica 2018, 214, 1.
37. Nutter, F.W.; Esker, P.D.; Netto, R.A.C. Disease assessment concepts and the advancements made in improving the accuracy and precision of plant disease data. Eur. J. Plant Pathol. 2006, 115, 95–103.
38. Pourazar, H.; Samadzadegan, F.; Dadrass Javan, F. Aerial multispectral imagery for plant disease detection: Radiometric calibration necessity assessment. Eur. J. Remote Sens. 2019, 52, 17–31.
39. Cao, S.; Danielson, B.; Clare, S.; Koenig, S.; Campos-Vargas, C.; Sanchez-Azofeifa, A. Radiometric calibration assessments for UAS-borne multispectral cameras: Laboratory and field protocols. ISPRS J. Photogramm. Remote Sens. 2019, 149, 132–145.
40. Yu, X.; Liu, Q.; Liu, X.; Liu, X.; Wang, Y. A physical-based atmospheric correction algorithm of unmanned aerial vehicles images and its utility analysis. Int. J. Remote Sens. 2017, 38, 3101–3112.
41. Schonberger, J.L.; Frahm, J.M. Structure-from-Motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
42. Jiang, S.; Jiang, C.; Jiang, W. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251.
43. Patrignani, A.; Ochsner, T.E. Canopeo: A powerful new tool for measuring fractional green canopy cover. Agron. J. 2015, 107, 2312–2320.
44. Bannari, A.; Morin, D.; Bonn, F.; Huete, A.R. A review of vegetation indices. Remote Sens. Rev. 1995, 13, 95–120.
45. Eitel, J.U.H.; Vierling, L.A.; Litvak, M.E.; Long, D.S.; Schulthess, U.; Ager, A.A.; Krofcheck, D.J.; Stoscheck, L. Broadband, red-edge information from satellites improves early stress detection in a New Mexico conifer woodland. Remote Sens. Environ. 2011, 115, 3640–3646.
46. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
47. Neath, A.A.; Cavanaugh, J.E. The Bayesian information criterion: Background, derivation, and applications. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 199–203.
48. Hutcheson, G. Ordinary least-squares regression. In The Multivariate Social Scientist; SAGE Publications: New York, NY, USA, 2011.
49. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
50. Bhanja, S.N.; Malakar, P.; Mukherjee, A.; Rodell, M.; Mitra, P.; Sarkar, S. Using satellite-based vegetation cover as indicator of groundwater storage in natural vegetation areas. Geophys. Res. Lett. 2019, 46, 8082–8092.
51. Hsu, C.W.; Chang, C.C.; Lin, C.J. A Practical Guide to Support Vector Classification; Technical Report; Department of Computer Science, National Taiwan University: Taipei, Taiwan, 2003; pp. 1–16.
52. Pacifico, L.D.S.; Macario, V.; Oliveira, J.F.L. Plant classification using artificial neural networks. In Proceedings of the International Joint Conference on Neural Networks, Rio de Janeiro, Brazil, 8–13 July 2018.
53. Zhang, Z.; Masjedi, A.; Zhao, J.; Crawford, M.M. Prediction of sorghum biomass based on image based features derived from time series of UAV images. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017.
54. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
55. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 2020, 48, 1875–1897.
56. Gitelson, A.A.; Merzlyak, M.N. Signature analysis of leaf reflectance spectra: Algorithm development for remote sensing of chlorophyll. J. Plant Physiol. 1996, 148, 494–500.
57. Hatfield, J.L.; Gitelson, A.A.; Schepers, J.S.; Walthall, C.L. Application of spectral remote sensing for agronomic decisions. Agron. J. 2008, 100, S-117–S-131.
58. Wahabzada, M.; Mahlein, A.K.; Bauckhage, C.; Steiner, U.; Oerke, E.C.; Kersting, K. Plant phenotyping using probabilistic topic models: Uncovering the hyperspectral language of plants. Sci. Rep. 2016, 6, 22482.
Figure 1. Study area located at the Purdue Pinney Purdue Agricultural Center (PPAC), Indiana, USA. Unmanned aircraft systems (UAS) images of research plots (Trial Tar1–4) were overlaid on a Google Map satellite image [31].
Figure 2. Tar spot treatments applied in the research plots. The number in each experimental unit is the amount of fungicide applied in L/ha. The distances between research plots were modified for display purposes. Fungicide and hybrid names were withheld due to confidentiality restrictions.
Figure 3. Spatial gridding scheme used for unmanned aircraft systems (UAS)-based crop phenotyping.
Figure 4. Tar spot disease rating in (a) middle and (b) upper canopy layers. The average disease severity is shown as dots on the solid lines. The shaded colored bands represent the standard deviation of the visual ratings.
Figure 5. Preprocessing and plant phenotyping procedure of unmanned aircraft systems (UAS) data. * avg: average, ** stdev: standard deviation.
Figure 6. Architecture of multilayer perceptron (MLP) used for UAS-based tar spot measurement. An optimal number of input data and processing nodes in the hidden layer vary according to the spatial resolution of explanatory variables (level 1 grid or level 2 grid) and response variables (visual rating in the middle or upper canopy layer).
Figure 7. Correlation coefficients between level 1 grid (L1G) unmanned aircraft system (UAS) phenotypes. CCCP: canopy cover derived from the Canopeo algorithm, CV: canopy volume, EXG: average of excessive greenness, MSAVI: modified soil-adjusted vegetation index, NDVI: normalized difference vegetation index, SAVI: soil-adjusted vegetation index, avg: average, mx: maximum, sd: standard deviation.
Figure 8. Tar spot disease rating measured by UAS-based MLP-L1G models at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with a random split of 3-fold cross-validation of all PPAC data.
Figure 9. Tar spot disease rating in Tar1 area measured by UAS-based MLP-L1G model at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with Tar2, Tar3, and Tar4 data.
Figure 10. Tar spot disease rating in Tar2 area measured by UAS-based MLP-L1G model at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with Tar1, Tar3, and Tar4 data.
Figure 11. Tar spot disease rating in Tar3 area measured by UAS-based MLP-L1G model at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with Tar1, Tar2, and Tar4 data.
Figure 12. Tar spot disease rating in Tar4 area measured by UAS-based MLP-L1G model at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with Tar1, Tar2, and Tar3 data.
Figure 13. Tar spot disease rating and level 1 grid (L1G) MSAVI average at the (a) middle and (b) upper canopy level.
Figure 14. Tar spot disease rating in Tar4 area measured by UAS-based MLP-L1G model at the (a) middle and (b) upper canopy level. The MLP-L1G model was trained with Tar1 data. A single observer estimated most of the tar spot visual ratings of the Tar1 and Tar4.
Table 1. Description of the experiments at the Pinney Purdue Agricultural Center (PPAC) in Indiana, USA, established in the 2020 cropping cycle.
| Experiment Name | Planting Date | Number of Treatments | Number of Hybrid(s) | Number of Fungicide Treatments | Tillage Type |
|---|---|---|---|---|---|
| Trial Tar 1 | 9 June 2020 | 10 | 1 | 9 + 1 (non-treated) | Strip |
| Trial Tar 2 | 6 June 2020 | 12 | 3 | 1 + 1 (non-treated) | Strip, conventional |
| Trial Tar 3 | 9 June 2020 | 18 | 1 | 16 + 2 (non-treated) | Strip |
| Trial Tar 4 | 8 June 2020 | 10 | 1 | 9 + 1 (non-treated) | Strip |
Table 2. Data acquisition dates of tar spot visual ratings and unmanned aircraft systems (UAS) data. Dates are displayed in MM/DD format for brevity. A: available, N/A: not available.
| Date | Tar1 Visual Rating | Tar1 UAS Data | Tar2 Visual Rating | Tar2 UAS Data | Tar3 Visual Rating | Tar3 UAS Data | Tar4 Visual Rating | Tar4 UAS Data |
|---|---|---|---|---|---|---|---|---|
| 07/13 | A | A | A | A | N/A | A | A | A |
| 07/23 | A | A | A | A | A | A | A | A |
| 07/30 | A | A | A | A | A | A | A | N/A |
| 08/06 | A | A | A | A | A | A | A | A |
| 08/13 | N/A | N/A | A | N/A | N/A | N/A | N/A | N/A |
| 08/17 | A | A | A | A | A | A | A | A |
| 08/20 | A | A | A | A | A | A | A | A |
| 08/25 | A | A | A | A | A | A | A | A |
| 09/03 | A | N/A | A | N/A | A | N/A | A | N/A |
| 09/10 | A | A | A | A | A | A | A | A |
| 09/15 | A | A | A | A | A | A | A | A |
| 09/22 | A | A | A | A | A | A | A | A |
| 09/29 | A | A | A | A | A | A | A | A |
| 10/06 | A | A | A | A | A | A | A | A |
| 10/13 | A | A | A | A | A | A | N/A | N/A |
Table 3. List of support vector regression (SVR) hyperparameters used in the grid search.
| Kernel | ε | C | Kernel Parameters |
|---|---|---|---|
| Linear | 0.05, 0.10, 0.15 | 0.1, 1, 10, 100 | Not applicable |
| Polynomial | 0.05, 0.10, 0.15 | 0.1, 1, 10, 100 | Degree of polynomial = 2, 3 |
| Radial basis function (RBF) | 0.05, 0.10, 0.15 | 0.1, 1, 10, 100 | γ = 10⁻⁵, 10⁻³, 10⁻¹, 10 |
Table 4. Pearson’s correlation coefficient between visual ratings and level 1 grid (L1G) unmanned aircraft systems (UAS) phenotypes. * avg: average, ** max: maximum, *** stdev: standard deviation.
| L1G UAS Phenotype | Middle Canopy | Upper Canopy |
|---|---|---|
| Canopy cover | −0.72 | −0.67 |
| Canopy volume | +0.09 | −0.03 |
| ExG avg * | −0.41 | −0.36 |
| ExG max ** | −0.41 | −0.37 |
| ExG stdev *** | −0.39 | −0.33 |
| MSAVI avg | −0.87 | −0.83 |
| MSAVI max | −0.82 | −0.81 |
| MSAVI stdev | +0.71 | +0.62 |
| NDVI avg | −0.86 | −0.82 |
| NDVI max | −0.82 | −0.81 |
| NDVI stdev | +0.56 | +0.45 |
| SAVI avg | −0.86 | −0.82 |
| SAVI max | −0.82 | −0.81 |
| SAVI stdev | +0.57 | +0.47 |
Table 5. Ordinary least squares results for the middle canopy. R2, adjusted R2, and BIC were 0.78, 0.78, and 1.80 × 10⁴, respectively.
| Variable | Coefficient | Standard Error | t | p > \|t\| |
|---|---|---|---|---|
| Constant | 22.38 | 0.274 | 81.73 | <0.001 |
| Canopy cover | −3.49 | 0.632 | −5.52 | <0.001 |
| Maximum of ExG | 1.84 | 0.354 | 5.12 | <0.001 |
| Average of MSAVI | −44.42 | 0.939 | −47.32 | <0.001 |
| Standard deviation of MSAVI | −8.55 | 0.663 | −12.89 | <0.001 |
Table 6. Ordinary least squares results for the upper canopy. R2, adjusted R2, and BIC were 0.76, 0.76, and 1.53 × 10⁴, respectively.
| Variable | Coefficient | Standard Error | t | p > \|t\| |
|---|---|---|---|---|
| Constant | 14.00 | 0.211 | 66.23 | <0.001 |
| Canopy cover | −3.02 | 0.506 | −5.97 | <0.001 |
| Maximum of ExG | 2.69 | 0.305 | 8.81 | <0.001 |
| Average of MSAVI | −31.54 | 1.661 | −18.98 | <0.001 |
| Maximum of MSAVI | −3.49 | 1.050 | −3.33 | 0.001 |
| Standard deviation of MSAVI | −9.65 | 0.682 | −14.15 | <0.001 |
Table 7. Optimal hyperparameters of SVR models.
| Canopy Layer | Kernel | ε | C | Kernel Parameters |
|---|---|---|---|---|
| Middle canopy | RBF | 0.05 | 1 | γ = 0.1 |
| Upper canopy | Polynomial | 0.05 | 0.1 | Degree of polynomial = 3 |
Table 8. Average and standard deviation of RMSE of unmanned aircraft system (UAS)-based tar spot disease measurement models.
| Model | Average RMSE (Middle Canopy) | Average RMSE (Upper Canopy) | Std. Dev. of RMSE (Middle Canopy) | Std. Dev. of RMSE (Upper Canopy) |
|---|---|---|---|---|
| OLS with single L1G phenotype | 12.4 | 10.0 | 0.09 | 0.08 |
| OLS with multiple L1G phenotypes | 11.9 | 9.0 | 0.09 | 0.07 |
| SVR with multiple L1G phenotypes | 11.4 | 10.8 | 0.12 | 0.13 |
| MLP with multiple L1G phenotypes | 10.4 | 7.9 | 0.07 | 0.04 |
| MLP with all L2G phenotypes | 10.4 | 8.2 | 0.06 | 0.06 |
