Article

Crop Loss Evaluation Using Digital Surface Models from Unmanned Aerial Vehicles Data

by Virginia E. Garcia Millan 1,*, Cassidy Rankine 2 and G. Arturo Sanchez-Azofeifa 3
1 Department of Physical Geography and Ecosystem Science, Lund University, Sölvegatan 12, 223 62 Lund, Sweden
2 Skymatics, Calgary, AB T2P3C5, Canada
3 Department of Earth and Atmospheric Sciences, University of Alberta, Edmonton, AB T6G 2E7, Canada
* Author to whom correspondence should be addressed.
Submission received: 31 January 2020 / Revised: 9 March 2020 / Accepted: 11 March 2020 / Published: 18 March 2020

Abstract
Precision agriculture and Unmanned Aerial Vehicles (UAV) are revolutionizing agriculture management methods. Remote sensing data, image analysis and Digital Surface Models derived from Structure from Motion and Multi-View Stereopsis offer new and fast methods to detect the needs of crops, greatly improving crop efficiency. In this study, we present a tool to detect and estimate crop damage after a disturbance (i.e., weather events, wildlife attacks or fires). The types of damage addressed in this study affect crop structure (i.e., plants are bent or gone), in the shape of depressions in the crop canopy. The aim of this study was to evaluate the performance of four unsupervised methods based on terrain analyses for the detection of damaged crops in UAV 3D models: slope detection, variance analysis, geomorphology classification and cloth simulation filter. A full workflow is designed and described in this article, involving the postprocessing of the raw results of the terrain analyses to refine the detection of damage. Our results show that all four methods performed similarly well after postprocessing, reaching an accuracy above 90% in the detection of severe crop damage without the need for training data. The results of this study suggest that the tested methods are effective and independent of the crop type, damage type and growth stage. However, only severe damage was detected with this workflow. Other factors such as data volume, processing time, number of processing steps and the spatial distribution of targets and errors are discussed in this article for the selection of the most appropriate method. Among the four tested methods, slope analysis involves fewer processing steps, generates the smallest data volume, is the fastest and produced the best spatial distribution of matches. Thus, it was selected as the most efficient method for crop damage detection.

Graphical Abstract

1. Introduction

At present, around 11% of the world's 13.4 billion ha of land surface is used for crop production (arable land and land under permanent crops) [1]. According to the World Factbook [2], in 2015 agriculture accounted for 5.9% of the gross domestic product (GDP) worldwide. However, even though other economic sectors are more profitable for national economies, the entire world population depends on agriculture and farming. This sector carries a high investment risk given the unpredictable factors that can affect yield, such as weather conditions, natural disturbances and market fluctuations. Furthermore, the growing season coincides with the peak of disturbances: fires, summer storms, breeding of herbivores, outbreaks of pests and diseases, etc. [3,4,5].
Under many climate change scenarios, many authors point out that human systems developed during the Holocene (such as agriculture) are at risk of disappearing, or at least being compromised, in the Anthropocene era that we have entered, and this situation is expected to worsen in the coming years [6,7,8]. While total world precipitation has not changed significantly in the last decade, the distribution of rainfall across the year and across space has changed, with significant effects on biomes and agricultural lands [9].
Farmers and agronomists are aware of the risks to their business and keep an eye on possible changes in climate. Thus, the literature contains many studies on alternative management practices to face climate change [10,11]. Some effects, unfortunately, cannot be prevented, such as floods, storms and fires. For this reason, an accurate evaluation and prediction of damage is of major importance for farmers and stakeholders in the agriculture business, including insurance companies. Many farmers and agriculture corporations take out insurance in case they lose part or all of the yield. Traditional crop-loss assessment methods are slow, labour intensive, biased by restricted field access, and subjective from adjuster to adjuster. The subjectivity of damage estimation leads in many cases to disagreements between farmers and adjusters over how many hectares are insurable. Both crop insurers and farmers are looking for ways to improve resource management and claims reporting. The current paper proposes more objective and accurate methods to calculate crop damage by means of Unmanned Aerial Vehicle (UAV) remote sensing.
Precision agriculture is a farming management concept based on observing, measuring and responding to inter- and intra-field variability in crops using remote sensing tools, such as satellites, planes or UAVs [12,13,14,15]. A review of precision agriculture companies reveals that they focus on very-high-resolution sensors [16,17,18], multispectral cameras [19,20,21], hyperspectral sensors [22,23,24], thermal sensing [25] and wireless sensor networks [25,26,27,28] to measure plant and soil properties such as plant nutrient and water content and soil humidity. Initially, precision agriculture made use of light-weight remotely controlled airplanes, but in recent years UAVs have been used more and more in this field because of their versatility [18]. Some authors have developed remote sensing methods for the evaluation of crop damage, such as the effects of droughts [29], the detection of fungal infection [30], crop losses in floods [31] and the detection of crops affected by fires [32]. Nevertheless, to our knowledge, there are few precision agriculture algorithms that make use of digital canopy models of plants for damage quantification. Moreover, the existing literature only covers case studies of one or a few crop types and/or damage types.
In the last decade, low-cost UAV technology has developed significantly, encouraging the use of UAVs for commercial applications such as precision agriculture, oil and gas, urban planning, etc. [18,33]. It is now possible to find low-cost UAVs, cameras and software on the market that collect high-resolution georeferenced imagery, which can later be used to generate Digital Surface Models (DSM) and 3D point clouds by means of Structure from Motion (SfM) and Multi-View Stereopsis (MVS) [34,35,36,37]. In particular, low-cost light UAVs fitted with RGB cameras are the Unmanned Aerial System (UAS) preferred by many users, farmers among others. Whereas RGB cameras are limited in the spectral products they can deliver beyond scouting and monitoring, DSMs avoid artefacts that often appear in UAV imagery, such as shadows projected by objects on the surface or by clouds, light differences during the flight, or colour differences caused by soil moisture or wet vegetation [38,39,40]. These challenges can be addressed with deep learning, but this requires a large labelled database to feed the algorithm [41]. While UAV-derived DSMs from commercial UAVs have largely been used for small-scale terrain mapping and 3D modelling of urban areas [42,43,44], they have also proved useful for agricultural applications [45,46,47,48,49,50].
As such, this study explored four terrain analysis tools, based on point cloud data or DSMs, for the delineation and quantification, in hectares, of damage in field crops due to insurable causes, such as weather events and wildlife attacks. We focused on events that cause severe physical damage to the plant structure, generating abrupt depressions in the surface of the canopy, from which the plant rarely recovers. This type of damage is the most distressing for farmers and the one assessed by adjusters for monetary compensation. Through the digital 3D reconstruction of the crop canopy, it is possible to detect differences in crop height or dramatic changes in slope that are related to damage. The accuracy of four different analysis methods is evaluated here and a workflow is proposed to deliver a digital product that can be translated into the number of hectares of damaged crop. The proposed workflow can easily be applied to any type of UAS equipped with snapshot cameras or video recorders, from which SfM and MVS-based DSMs are later produced. Our contribution to the field of precision agriculture is the assembly, from existing methods, of a tool that has proven successful as a generic crop damage estimator, independent of the crop type, damage type or growth stage.

2. Materials and Methods

2.1. Dataset Collection

Six sites in Europe and America corresponding to different crop types, damage types and growth stages were selected for this study (Figure 1, Table 1). These sites present different climates and soil conditions, and are covered by dry croplands (wheat and barley) and irrigated croplands (corn). Our dataset comprises the point clouds, DSMs and orthomosaics from six field crops that were damaged and later imaged with a UAS. Volunteer farmers who had suffered damage to their croplands donated the data. A code was assigned to each dataset to keep the confidentiality of the participants (Table 1). One goal of this study is to evaluate the potential of unsupervised remote sensing methods to deal with generic DSM data. Therefore, no detailed information about the crops and the events that generated the damage was requested from the farmers. No information about the UAV or camera models, or flight conditions, was available either.
The data were selected to represent different situations that might affect the results of the terrain analysis: crop type, damage type and growth stage. Plant structure, and particularly the crop canopy, is related to the crop type (species) and growth stage; the height of the plants, their density and distribution, leaf clumping, the texture of the canopy, etc., affect the surface of the canopy and, therefore, the DSM. On the other hand, damage type and intensity define the geometry and size of the damage, which is expressed as depressions in the crop canopy. This is also reflected in the DSM. To cover different scenarios, crops with significantly different height and vertical geometry were selected: corn vs. barley and wheat. In the same fashion, different damage types were selected to provide a variety of situations: lodging, wildlife attacks, windstorms and man-made damage. To add heterogeneity to the dataset, each crop in the dataset was in a different growth stage (Table 1).
DroneDeploy [51] (DroneDeploy, San Francisco, CA, US) was used as the platform where the participants of the study uploaded their datasets. The data were nominally generated at 5 cm pixel size, but this fine resolution greatly increases the computation time. Therefore, the orthomosaics and DSMs were downloaded at 10 and 50 cm pixel resolution. In addition, for some of the tested methods, a larger pixel size is recommended to identify the canopy depressions as an entity [52]. The Digital Surface Models were processed and generated by DroneDeploy, which uses an algorithm based on SfM [53,54,55,56,57,58,59].

2.2. Workflow to Detect Crop Damages Using UAV-DSM

Certain events, such as windstorms and wildlife attacks, affect the structure of plants in the form of depressions on the crop canopy. These damages cause a dramatic change of elevation between healthy and damaged vegetation, and are visible in a DSM. This motivated the selection of methods that can detect changes in elevation directly from the data. Surface morphology has been defined using DSMs in other disciplines such as topography, hydrology, forestry or urban planning [60,61,62,63]. In this study, we took advantage of existing algorithms for the delineation of crop damages.
However, there can be other elements in the DSM that cause discontinuities or depressions in the crop canopy, such as machinery tracks or water ponds, which can be wrongly classified as damage in the terrain analysis. In order to distinguish crop damage from other elements of the landscape, additional processing steps, based on the spatial characteristics of the different depression types, were implemented before and after the terrain analysis. Overall, this article presents a workflow that includes (1) the preprocessing of the input DSMs, (2) the terrain analysis and (3) the postprocessing of the results (Figure 2).

2.2.1. Terrain Analysis Methods

In this study, four terrain analysis algorithms were tested: slope detection [64], variance analysis [65], geomorphology classification [52,66] and cloth simulation filter [18] (Table 2).
Slope detection and Variance were developed to detect changes in topographic elevation. Geomorphology classification was developed in the field of hydrogeology, to locate areas where water would run off or accumulate in a given landscape. Cloth simulation filter was designed to separate digital surface and ground points from a LiDAR dataset of urban or forested areas.
From an analytical point of view, Geomorphology classification [66,67], Slope [64] and Variance [65] compare each pixel of the scene with the neighbouring pixels to discern its relative elevation with respect to its surroundings. These are known as "kernel-based" methods. On the contrary, the Cloth Simulation Filter (CSF) [18] considers the relative position of points and surfaces with reference to the whole observed area.
The results of Slope and Variance are rasters in floating-point format. In this study, low values correspond to flat or invariant areas (standing vegetation or bent vegetation); high values correspond to areas that present a significant change in elevation (edges of damage).

Slope Detection

Slope was calculated in ArcGIS 10.2. (ESRI, Redlands, CA, US) using the DSMs in raster format. The calculation of slope is based on a trigonometric rule (1) [64]. For clarity in the equations, please follow the naming of pixels of Figure 3:
slope = tan θ = √([dz/dx]² + [dz/dy]²)    (1)
where tan is the tangent of the angle θ, x and y are dimensions in the horizontal plane and z is the height. In this case, slope is calculated on a 3D model; therefore, both the relationship of x vs. z and of y vs. z are calculated. Considering that we want to calculate the slope of the central pixel e in a kernel window of 3 × 3 pixels, the rate of change in the x direction is calculated as:
[dz/dx] = ((c + 2f + i) − (a + 2d + g)) / (8 × x_cellsize)
where c, f and i are the pixels in the column to the right of e, while a, d and g are the pixels in the column to its left. In the same way, the rate of change in the y direction is calculated as:
[dz/dy] = ((g + 2h + i) − (a + 2b + c)) / (8 × y_cellsize)
where g, h and i are the pixels in the top row relative to e, while a, b and c are the pixels in the bottom row. In summary, taking the rates of change in the x and y directions, the slope of the center cell e is calculated using:
slope = √([dz/dx]² + [dz/dy]²)
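For illustration, the kernel computation above can be reproduced outside ArcGIS with a short script. The following is a minimal sketch (not the exact ArcGIS implementation); the array name dsm and the cellsize argument are assumptions for this example.

```python
import numpy as np
from scipy import ndimage

def slope_degrees(dsm: np.ndarray, cellsize: float) -> np.ndarray:
    """Per-pixel slope (degrees) of a DSM using the 3 x 3 kernel rates of
    change described above (Horn-style weights, as in the ArcGIS Slope tool)."""
    # dz/dx weights: (c + 2f + i) - (a + 2d + g), divided below by 8 * cellsize
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    # dz/dy weights: (g + 2h + i) - (a + 2b + c), divided below by 8 * cellsize
    ky = np.array([[-1.0, -2.0, -1.0],
                   [ 0.0,  0.0,  0.0],
                   [ 1.0,  2.0,  1.0]])
    dzdx = ndimage.correlate(dsm, kx, mode="nearest") / (8.0 * cellsize)
    dzdy = ndimage.correlate(dsm, ky, mode="nearest") / (8.0 * cellsize)
    # tan(theta) = sqrt((dz/dx)^2 + (dz/dy)^2); return theta in degrees
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
```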

Variance Analysis

Variance is a measure of the dispersion of values around the mean. In our study, the variance is calculated on the DSMs, where the pixel values refer to crop height, and compares pixels within a defined kernel size. First-order metrics operate on the counts (occurrences) of the different digital number (DN) values within a kernel. For the sake of comparison, the same 9 × 9 kernel size was used for the six tested DSMs.
Variance was calculated in ENVI 5.0 [67] (Exelis Visual Information Solutions, Boulder, CO, US). For each pixel in the image, ENVI obtains the normalised histogram of the values in the kernel, which determines the frequency of occurrence of the quantised values within the kernel. Then, it computes the first-order occurrence metric to calculate the variance [65]:
Variance = Σ_{i=0}^{Ng−1} (i − M)² × P(i)
where i is the pixel value, M is the mean of all pixel values in the kernel, P(i) is the probability of each pixel value, and Ng is the number of distinct grey levels in the quantised image [65]. P(i), the probabilities, are the normalised occurrence values, obtained by dividing the occurrence counts by the number of values in the kernel [65].
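For a continuous-valued DSM, the kernel variance above is equivalent to a moving-window variance. The following minimal sketch (an approximation, not the ENVI implementation; function and variable names are assumptions) computes it for a 9 × 9 kernel as used in this study.

```python
import numpy as np
from scipy import ndimage

def kernel_variance(dsm: np.ndarray, kernel: int = 9) -> np.ndarray:
    """Moving-window variance of crop heights over a square kernel
    (9 x 9 here), i.e., the first-order occurrence variance."""
    mean = ndimage.uniform_filter(dsm, size=kernel, mode="nearest")
    mean_of_squares = ndimage.uniform_filter(dsm * dsm, size=kernel, mode="nearest")
    # Var = E[z^2] - E[z]^2, equivalent to summing (i - M)^2 * P(i) over the kernel
    return np.clip(mean_of_squares - mean * mean, 0.0, None)
```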

Geomorphological Tools

Geomorphology classification is performed with a module (r.geomorphon) in GRASS 7.8.2 [68] (OSGeo, Beaverton, OR, US) designed to classify landforms from DSMs in raster format [52,66]. Originally, it was designed for satellite-derived data. The Geomorphon tool analyzes the relationship of a given point to its surroundings. Eight orthogonal lines are traced with the observed point in the center. The intersections of the DSM with these lines mark the reference points that determine whether the surroundings are higher, lower or at the same level as the observed point.
Form recognition depends on two parameters: search radius and flatness threshold. Small values of the search radius identify landforms smaller than the search radius, while larger landforms are broken down into components. Larger values of the search radius allow the simultaneous identification of landforms of a variety of sizes, at the expense of recognizing smaller, second-order forms. The flatness threshold is the z-value surface that separates the point cloud into ground and non-ground points. There are two additional parameters: skip radius and flatness distance. The skip radius is used to eliminate the impact of small irregularities. The flatness distance, on the contrary, compensates for very large search radii (in meters), which may fail to detect an elevation difference when it occurs at a very large distance. The flatness distance is especially important with low-resolution DSMs [52,66].
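As an indication of how this classification could be scripted, the following is a hedged sketch using the GRASS Python scripting interface. It assumes an active GRASS session and that the detrended DSM has already been imported as a raster named "csm"; raster names are illustrative, and the parameter values follow Table 3.

```python
# Hedged sketch: assumes an active GRASS GIS 7.8 session; raster names are illustrative.
import grass.script as gs

gs.run_command(
    "r.geomorphon",
    elevation="csm",            # input: detrended crop surface model
    forms="geomorph_forms",     # output: map of the ten landform classes
    search=60,                  # outer search radius (as in Table 3)
    skip=0,                     # skip radius for small irregularities
    flat=1,                     # flatness threshold
    overwrite=True,
)

# Keep only depression-like classes (9 = valley, 10 = depression) as candidate damage.
gs.mapcalc("damage_candidates = if(geomorph_forms >= 9, 1, null())", overwrite=True)
```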

Cloth Simulation Filter (CSF)

Cloth Simulation Filter (CSF) is a plug-in for CloudCompare [18] that extracts ground points from discrete-return LiDAR point clouds. The parameters that define the cloth are:
  • Cloth resolution. This refers to the grid size (in the same unit as the point cloud) of the cloth used to cover the terrain (xy dimensions). The larger the cloth resolution, the coarser the resulting Digital Terrain Model (DTM).
  • Classification threshold. This is the distance between points in the z dimension. It refers to a threshold (in the same unit as the point cloud) used to classify the point cloud into ground and non-ground parts based on the distances between the points and the simulated terrain.
The tool simulates laying a cloth on an inverted Digital Elevation Model (DEM), connecting the points in the cloud that define the edges of elevated objects. The cloth represents the ground points of the DEM, and the remaining points the objects with a certain elevation. The purpose in our study is to apply it to the DSMs of the crops to separate the surface of the healthy crops from the depressions in the canopy, which correspond to damaged crops. The rationale is the opposite of the common use of CSF: in our case, we are not separating elevated objects from the ground, but the crop canopy surface from the depressions.
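Besides the CloudCompare plug-in, the authors of [18] also distribute Python bindings for CSF. The following is a hedged sketch of that route under our use of the filter; the file name and parameter values are illustrative only, and the binding API is assumed from the package documentation rather than used in this study.

```python
# Hedged sketch using the CSF Python bindings (package "cloth-simulation-filter").
import numpy as np
import CSF

points = np.loadtxt("detrended_csm_points.xyz")   # N x 3 array (x, y, z); illustrative path

csf = CSF.CSF()
csf.params.cloth_resolution = 0.5   # cloth grid size in xy (site dependent, Table 3)
csf.params.class_threshold = 0.2    # z-distance threshold used for all sites

csf.setPointCloud(points)
ground_idx, offground_idx = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground_idx, offground_idx)

# On a detrended crop surface, the "ground" class corresponds to canopy
# depressions, i.e., candidate crop damage; the rest is standing vegetation.
damage_points = points[np.array(list(ground_idx), dtype=int)]
```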
Table 3 summarises the parameters chosen for the different terrain analyses. Except for CSF's cloth resolution, all parameters were the same for all study cases to ensure consistency: a kernel of 9 × 9 pixels was used for the calculation of the Variance; a classification threshold of 0.2 was used for CSF; and for the Geomorphology classification, an outer search radius of 60 and a flatness threshold of 1 were set. Unfortunately, it was not possible to find a rule for the cloth resolution that responds to the data characteristics (i.e., pixel resolution, topography, etc.); therefore, these values were selected manually, by testing different values and choosing the best result visually. In addition to the parameters shown in Table 3, a threshold to separate damage and no damage in the Slope and Variance outputs was calculated using a logistic function [69].

2.2.2. Data Preprocessing

The preprocessing step of the workflow involved (1) detrending the data, (2) projecting the point clouds into Universal Transverse Mercator (UTM) coordinates and (3) converting the DSM into a point cloud. Steps (2) and (3) were only required for the CSF method.
The detrending of the digital elevation data (DSM or point cloud) aimed to remove the topographic trend and keep only the height of the objects on the surface (crops, in this case). The specific term in precision agriculture for the product of this process is Crop Surface Model (CSM), which refers to the 3D representation of the crop heights alone. However, for simplicity, we keep the term "DSM" in the body of the text. There are two main reasons for detrending the data. On the one hand, the Cloth Simulation Filter (CSF) is sensitive to topography. Since crop depressions are less evident than the terrain topography, detrending the DSMs beforehand is necessary to locate crop depressions in non-flat terrain. On the other hand, although the Slope, Variance and Geomorphology classification methods are independent of the general topography of the terrain (since the analysis is made at the kernel level), detrended DSMs were also needed later in the postprocessing step. Therefore, all DSMs were detrended at the beginning of the workflow [63].
The detrending process requires a first step of calculating the trend of the terrain, which is computed on a grid of points containing the elevation values. The DSMs were transformed into a point structure, transferring the elevation value to a point in the center of each pixel of the raster. The trend was calculated in ArcMap [69] with a polynomial regression (order 6), which removed the topography without eroding the elements of the surface. The last step of the preprocessing was the subtraction of the trend from the original DSM (Figure 4).
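The idea of the detrending step can be illustrated with a short least-squares sketch: fit a polynomial trend surface to the DSM and subtract it. This is only a minimal approximation of the ArcMap Trend step described above; the function and variable names are assumptions.

```python
import numpy as np

def detrend_dsm(dsm: np.ndarray, order: int = 6) -> np.ndarray:
    """Fit a polynomial trend surface of the given order to the DSM and
    subtract it, leaving relative crop heights (a Crop Surface Model)."""
    rows, cols = np.indices(dsm.shape)
    # Normalise coordinates to keep the high-order fit numerically stable.
    x = (cols.ravel() - cols.mean()) / max(cols.max(), 1)
    y = (rows.ravel() - rows.mean()) / max(rows.max(), 1)
    z = dsm.ravel().astype(float)
    valid = np.isfinite(z)

    # Design matrix with every monomial x^i * y^j such that i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x ** i * y ** j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A[valid], z[valid], rcond=None)

    trend = A @ coeffs
    return (z - trend).reshape(dsm.shape)
```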
CloudCompare requires the input data to be projected in Universal Transverse Mercator (UTM) coordinates. Since a detrended point cloud was needed for CSF, we derived the points from a CSM that had been projected into UTM in ArcMap.

2.2.3. Data Postprocessing

The postprocessing step was meant to (1) convert data formats for the next steps of the workflow, (2) select, from the outcomes of the analyses, the data that correspond to damaged vegetation, (3) reduce errors in the results of the terrain analysis and (4) calculate the damaged areas in hectares (Figure 5).
To start with, the tested terrain analysis methods do not deliver "damage" directly. Variance and Slope detect the edges of the damage but not the damaged area. Geomorphology classification delivers a map of geomorphologies, from which only the classes that represent depressions are of interest. Only in the case of CSF is the output of the analysis directly the damaged areas, although not in the appropriate format.
Afterwards, the refined results of the methods were transformed into a polygon shape, to be able to calculate the damaged areas in the crops under study.
For Slope, values close to vertical (90 degrees) depict the edge of the damage. For Variance, high variability means a change of elevation and, therefore, the edge of the damage. Once the edge of the damage was delineated by a threshold, it was necessary to label which areas lie within the edge (damage) and which outside it (no damage). This classification was made based on the elevation of the crops, calculated from the CSM.
First, it was necessary to select the threshold value that represents the edge of the damage. We used a logistic function over the histogram of values, defined as [70]:
d(t) = 1 / (1 + e^(−t))
where d(t) is the logistic function and t is a parameter to be fitted to our data.
The logistic model describes a sigmoid curve separating the values into two groups. The inflection point of the curve, projected on the x-axis, is the threshold that separates the edges of damage from the rest of the values (standing and damaged vegetation). The result of this operation is a Boolean raster where 1 refers to the edge of the damage and 0 to anything else (crop canopy and damaged crops).
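One plausible implementation of this threshold selection, fitting the sigmoid to the cumulative histogram and taking its inflection point, is sketched below. The exact fitting procedure of the study is not detailed in the text, so this is a hedged interpretation; function names and the number of bins are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k, lower, upper):
    """Generalised logistic curve; x0 is the inflection point."""
    return lower + (upper - lower) / (1.0 + np.exp(-k * (x - x0)))

def logistic_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Fit a logistic curve to the cumulative histogram of a Slope or Variance
    raster and return the inflection point, used as the damage-edge threshold."""
    values = values[np.isfinite(values)]
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cdf = np.cumsum(hist) / hist.sum()
    p0 = [float(np.median(values)), 1.0, 0.0, 1.0]   # initial guess
    (x0, _k, _lo, _hi), _ = curve_fit(logistic, centers, cdf, p0=p0, maxfev=10000)
    return float(x0)
```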
The Boolean rasters were converted into a polygon vector, and the average height of each polygon was calculated by crossing the polygons with the CSM. This allowed us to interpret which areas were damage (low crop height) and which were standing vegetation (high crop height).
Again, the logistic function was used to separate polygons with high and low crop height, by evaluating the histogram of heights. Therefore, the thresholds for the damage edge and the damaged areas were not selected manually by the user, but derived from the data itself.
In the case of the Geomorphology classification, classes 9 (valley) and 10 (depression) represent crop damage. For this reason, these two classes were selected from the geomorphology classification map.
CSF separates ground points from the whole point cloud. In this study, those points are depressions in the canopy surface (crop damage). To calculate the area that those ground points cover, they were transformed into a surface, in the form of a raster or polygon.
As expected, the terrain methods selected all canopy depressions present in the data. To keep only those that correspond to crop damage, the postprocessing aimed to identify and remove non-damage areas, such as machinery tracks and ponds.
Several spatial analyses were used to remove errors and enhance the accuracy of the crop damage detection. For instance, sparse pixels were removed using a majority filter, with a kernel of up to 8 × 8 pixels and a replacement threshold of one half. In addition, an area:perimeter ratio was calculated to identify and remove linear structures, which generally corresponded to machinery tracks or field edges. An area:perimeter ratio close to 1 corresponds to rounded shapes, while long linear structures present low area:perimeter values because of their long perimeters. Area:perimeter values under 1 also represent polygons with very irregular shapes. By selecting polygons with an area:perimeter ratio between 1 and 5, machinery tracks were filtered out from the damage polygons.
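The two cleaning steps can be approximated on a raster damage mask as follows. This is only a minimal sketch under stated assumptions (it works on a connected-component raster rather than on the polygon vectors used in the study, and uses a crude boundary-pixel perimeter); the function name and arguments are illustrative.

```python
import numpy as np
from scipy import ndimage

def clean_damage_mask(mask: np.ndarray, cellsize: float, kernel: int = 8,
                      min_ratio: float = 1.0, max_ratio: float = 5.0) -> np.ndarray:
    """Raster approximation of the postprocessing: a majority filter to remove
    sparse pixels, then an area:perimeter test to drop linear features."""
    # Majority filter: keep a pixel as damage only if at least half of its
    # kernel neighbourhood is classified as damage.
    frac = ndimage.uniform_filter(mask.astype(float), size=kernel, mode="nearest")
    filtered = frac >= 0.5

    labeled, n_regions = ndimage.label(filtered)
    keep = np.zeros_like(filtered)
    for region in range(1, n_regions + 1):
        blob = labeled == region
        area = blob.sum() * cellsize ** 2
        # Rough perimeter: number of boundary pixels times the cell size.
        perimeter = (blob & ~ndimage.binary_erosion(blob)).sum() * cellsize
        if min_ratio <= area / max(perimeter, 1e-9) <= max_ratio:
            keep |= blob
    return keep
```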

2.3. Validation of Data

A validation workflow was designed to generate independent data that can be used to calculate the overall accuracy of the tested methods, as well as the users' and producers' accuracy (Figure 6 and Figure 7).
The orthomosaics of the six study sites were used to manually digitise polygons of damaged vegetation, following expert criteria (Figure 8). The owners of the data provided the coordinates of the centroid of a damaged area as an example, which facilitated the photointerpretation of the damage class.
For validation, two classes were defined: damaged and non-damaged vegetation. For each orthomosaic, a "damage" polygon vector was drawn by photointerpretation. By default, the "no-damage" class was the rest of the crop that was not damaged. A "no-damage" polygon vector was obtained by clipping a polygon vector representing the boundary of the crop field with the "damage" polygon (Figure 8). These vectors were considered the ground-truth data for validating the results of our study. The "damage" and "no-damage" ground-truth data were crossed with the results of the crop damage workflow (from this point on, the "classified" layer) to compute the percentage of coincidences and errors (Figure 8). By intersecting the "damage" vector with the results vector, two new layers were obtained: (1) a layer of the damage correctly classified, composed of the area that is coincident in both the ground-truth "damage" vector and the "results" layer; and (2) a layer that depicts the errors, i.e., the area wrongly classified as "damage" in the results, which does not coincide with the photointerpreted "damage" vector (Figure 8).
According to the authors of [71], using a sample that represents 15% of the total population for accuracy assessment leads to a pessimistic evaluation of the method. Therefore, in this study, 10% of the surveyed area (whole crop field area) was used to evaluate the accuracy of the tested terrain analysis methods. To obtain a random sample from each crop field, the data (ground truth and results) were transformed from polygon vector to raster, and later into a point vector. Ten percent of the points were randomly selected, with a minimum distance of 1 m between random points (Figure 7).
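As an illustration of this sampling step, the following sketch draws roughly 10% of the cells of a class raster while enforcing a minimum spacing by thinning candidates to a coarse grid. It is only an approximation of the GIS-based procedure described above (which operates on point vectors); all names and the thinning strategy are assumptions.

```python
import numpy as np

def sample_validation_points(class_mask: np.ndarray, cellsize: float,
                             fraction: float = 0.10, min_dist: float = 1.0,
                             seed: int = 0) -> np.ndarray:
    """Randomly draw ~10% of the cells of a class raster as validation points,
    keeping candidates at least min_dist apart by grid thinning."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(class_mask)
    step = max(int(round(min_dist / cellsize)), 1)
    on_grid = (rows % step == 0) & (cols % step == 0)
    rows, cols = rows[on_grid], cols[on_grid]
    n_sample = min(int(fraction * class_mask.sum()), rows.size)
    idx = rng.choice(rows.size, size=n_sample, replace=False)
    # Return x, y coordinates in raster units (pixel centers).
    return np.column_stack([(cols[idx] + 0.5) * cellsize, (rows[idx] + 0.5) * cellsize])
```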
The random points layer was intersected with the "damage", "no-damage", "matches" and "classified" layers to obtain the 10% of random points of each class. The class values were transferred to the random points vector and used to calculate the accuracy of the methods.
The expressions used to evaluate the accuracy of the tested methods were defined by the authors of [72] (Figure 9):
Overall accuracy = (no-damage + matches) × 100 / n
Users' accuracy = 100 − ((damage − matches) × 100 / n)
Producers' accuracy = 100 − ((classified − matches) × 100 / n)
where:
  • n: number of points in the 10% random points layer for the full surveyed area
  • Damage: number of points in the 10% random points layer for the damage layer
  • No-damage: number of points in the 10% random points layer for the no-damage layer
  • Classified: number of points in the 10% random points layer for the classified layer
  • Matches: number of points in the 10% random points layer for the matches layer
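For clarity, the three expressions above reduce to simple arithmetic on the point counts; a minimal sketch (function name is an assumption) is:

```python
def accuracy_metrics(n: int, damage: int, no_damage: int,
                     classified: int, matches: int) -> tuple:
    """Overall, users' and producers' accuracy (%) from the point counts of the
    10% random-points layers, following the expressions above."""
    overall = (no_damage + matches) * 100.0 / n
    users = 100.0 - (damage - matches) * 100.0 / n
    producers = 100.0 - (classified - matches) * 100.0 / n
    return overall, users, producers
```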

3. Results

3.1. Calculation of Thresholds to Separate Damaged and Nondamaged Crops from Preliminary Results

As observed in the rasters resulting from the Slope and Variance analyses, these methods depict a dramatic change of values along the edges of crop damage. In other words, the majority of values are either low or high, with few intermediate values. A logistic function was used to separate the high from the low values by finding the threshold value that separates the edges of damage from the rest of the values. These thresholds are shown in Table 4.
As observed in Table 4, the slope threshold delimiting the edges of damage is very similar for each DSM and very close to 90 degrees, a vertical slope. On the contrary, the variance thresholds defining the edges of the damage vary considerably from one DSM to another.
In the case of CSF, we could not find a statistic to determine the optimal threshold to classify crop canopy depressions. In a preliminary exploration of the output data, it was observed that a classification threshold of 0.2 worked better than any other threshold for all DSMs. This parameter is related to the xy dimension, in other words, the pixel size, and we set it as a fixed parameter. However, the cloth resolution parameter gave different results (z value) for the different tested crops, suggesting that the height of the crop (related to the crop type and growth stage) influences the result of the tool; namely, the height of the damage and the height of the standing vegetation are related.

3.2. Crop Height Values Thresholds for Damaged and Nondamaged Vegetation

The study cases present different crop heights, as they represent crops of different species and growth stages. Moreover, the intensity of the damage is different in every case, which makes it difficult to estimate the crop height. The farmers did not provide information about the mean crop height (neither for standing nor for damaged vegetation), as one of the purposes of the study was to develop a method to detect damage independently of the characteristics of the crop fields.
Therefore, the height was assessed as the relative difference in crop height between standing and bent vegetation, independently of the absolute height values. The evaluation of the histogram made it possible to separate damaged and non-damaged vegetation through a logistic function analysis [72]. Table 5 shows the crop height thresholds that allowed the separation of healthy and damaged vegetation.

3.3. Validation of Terrain Analyses

In order to decide which method is most appropriate for the detection of damage in field crops, we calculated the overall accuracy and producers' accuracy for each study case (Table 6). The maps with all results are shown in the Supplementary Materials.
Since only two classes are explored (damage vs. no damage), the "overall accuracy" equals the "users' accuracy". According to the results, the four tested methods are highly accurate in the classification of crop damage, independently of the crop type, growth stage or damage type. Overall and users' accuracy are always between 87% and 99%, while producers' accuracy ranges between 83% and 100%. The damage was underestimated in B1L, C1M (except for Variance), C2A (except for CSF) and C3W (Variance and CSF), and slightly overestimated for the rest of the cases (Table 6).
Based on the users' accuracy, Slope and Variance achieved the best results, while the Geomorphology classification performed the worst for all tested sites (Table 6). The accuracies vary from 93–99% for Slope and Variance (with the exception of the C1M site) to 87–97% for the Geomorphology classification (Table 6). According to the producers' accuracy, CSF provides the best results for most sites (90–99%), with the Geomorphology classification having the lowest values in the table, between 77% and 88% (Table 6). Therefore, Slope and Variance are good estimators of the overall crop damage, although they slightly overestimate the damaged area, while CSF produces smaller errors of commission.
B1L presents the highest users' accuracy and also performed well in producers' accuracy, together with W1W (Table 6). B1L corresponds to a barley cropland affected by lodging, during the milk growth stage, in a terrain with heterogeneous topography (Table 1). W2W, C2A and C1M present the lowest users' and producers' accuracies (Table 6) for all methods, although these are still high, between 87–97% users' accuracy and 77–90% producers' accuracy (Table 6). W2W is winter wheat damaged by wind, while C2A and C1M are corn fields damaged by wildlife and humans, respectively. In all three cases, the crops were in an advanced stage of growth (Table 7).
For the selection of the most appropriate method, other factors apart from accuracy were considered, since all methods performed with an accuracy of over 90% (Table 6). The complexity of the process, the data volume of intermediate and final files and the computing time of the workflow using the different terrain analysis methods were calculated (Table 8). Slope was the method that generated the smallest data volume and performed fastest. At the other end of the ranking, Geomorphon was the most time-consuming process and Variance was the process that produced the largest amount of intermediate data.
In addition, a visual inspection of the results was carried out for each terrain method to evaluate the spatial distribution of errors. Figure 10 shows the results for one of the sites (C1M) across the four tested methods. To see all results for all study sites, please check the Supplementary Materials.
The visual inspection also showed that the edges of the damage were not as clear in W2W, C1M and C2A, which presented a lower accuracy for all terrain analysis methods. In these cases, the vegetation lies within the damaged areas, so the limits between healthy and damaged vegetation are not as abrupt as in the other cases. On the other hand, B1L, W1W and C3W, which present the highest crop damage detection accuracies, showed well-defined damage, with the ground visible in the damaged areas. Therefore, we can say that the efficiency of the methods depends on the severity of the damage.
In addition, CSF was penalised in the selection of the best method because it required some supervision from the user, since we did not find a way to calculate the classification threshold and cloth resolution parameters statistically from the data.
Considering that Slope was the fastest and simplest method (Table 8), the one that performed best with an average overall accuracy of 96.9% among all tested sites (Table 6), and the method that presented the best spatial performance (Figure 10), it is the recommended method for the detection of crop damage from UAV-derived DSMs.

4. Discussion

Agriculture is a vital economic sector worldwide. At the same time, it is a risky activity due to many factors that are outside human control or difficult to predict, such as weather and market fluctuations. The incorporation of remote sensing into agriculture management is greatly improving the efficiency of the agribusiness and facilitating the work of farmers and other stakeholders.
Farmers and agronomists are incorporating UAVs into their daily crop management. However, most users exploit only some UAV functions, such as videos or single shots of the entire cropland at a high elevation and low pixel resolution. Moreover, most users rely on basic RGB cameras and light, low-cost UAS that do not allow sophisticated spectral analysis but do provide DSMs.
There are numerous studies that use DSM analysis for precision agriculture [45,46,47,49]; however, the use of DSMs for crop damage estimation has been scarcely explored [29,30,31,32,40,73,74,75,76]. Michez et al. [77] proposed a method to detect damage in corn fields caused by wildlife, using UAV-based DSMs and height thresholds. In a similar fashion, Kuzelka and Surovy [78] used classification algorithms on crop heights in UAV-DSMs to locate damage in wheat caused by wild boars, using accurate height measurements taken in the field with GNSS (Global Navigation Satellite System) to validate their results. Stanton et al. [79] reported that crop height extracted from SfM DSMs and NDVI from UAV products were related to stress caused by aphid infestations. Similarly, Yang et al. [80] used a hybrid method of spectral and DSM data to classify lodging damage in rice fields. As can be seen, the existing literature only evaluates individual case studies or uses DSM data as a proxy. Our contribution to the sector is a workflow that can potentially be used for any case of structural crop damage, crop type and growth stage, in field crops that present significant damage intensity.
The goal of the present study is to provide a versatile and unsupervised tool, based on DSM products and Geographic Information Systems analysis, for assessing crop damage after a disturbance. These events (i.e., wildlife, windstorms, fires) bend the vegetation or, in the case of a very intense event, remove plant stands from the terrain. In effect, the damaged vegetation is lower than the healthy vegetation, and the damage appears as a depression or discontinuity in the crop canopy. These depressions can be modelled in Digital Surface Models (DSM) of the crop canopy and used to detect damage through terrain model analysis. The presented workflow delivers an estimation of damage in area units. If the height of the crop is known, the damage can also be estimated as a volume metric. If the yield and plant metrics from other years are known, yield loss can be inferred by regression analysis.
Since we did not find any terrain analysis developed specifically for agriculture, we used algorithms from other applications (such as hydrology, topography and forestry) for the analysis of crop canopies. Four terrain analysis methods were tested: Slope detection, Variance analysis, Geomorphology classification and Cloth Simulation Filter. A selection of six croplands in different locations in America and Europe was used, representing different crop types (C: corn, W: wheat and B: barley), damage types (W: wind, A: animals, L: lodging, M: man-made) and growth stages, in order to evaluate the influence of these parameters on the accuracy of the tested terrain analysis methods. The results of this study revealed that all tested methods are very accurate at detecting crop damage from UAV DSMs. The worst results are above 77% (for the Geomorphology classification), while some of the methods almost reached 100% accuracy (98% for the Slope and Variance analyses). Our results also showed that our workflow is independent of environmental and geographic factors, such as topography, soil type and irrigation management.
One of the challenges of this project was to create a tool able to analyze any generic DSM, independently of the UAV and camera types, in order to support a broad audience of farmers, agronomists and other agribusiness stakeholders. We also wanted our workflow to be independent of the observed target, covering a large number of situations, such as different crop types, damage types and growth stages. From a technical point of view, one important achievement is the use of unsupervised methods. Other authors, such as Li et al. [41], have successfully used deep learning techniques to detect tree plantations in UAV images. Deep learning overcomes problems such as projected shadows and colour differences in UAV imagery, at the cost of a large labelled database. A limitation of this study is that the workflow only works on field crops (i.e., wheat, barley, corn or canola). Damage in row crops would require different methods [74], which are based on image or 3D artificial intelligence algorithms [41]. The proposed workflow has proved to be a versatile tool that delivers accurate crop damage estimations independently of the crop type, damage type and growth stage.
A major limitation of our study is that the presented workflow can only detect severe damage in crops. In this study, the damage intensity has not been specifically defined, among other reasons because of the challenge of defining the term from a technical perspective. However, from the dataset it was possible to observe that the damage was more evident in some cases than in others. The orthomosaics allowed us to observe that the workflow was more effective in those crops where the vegetation was completely bent than in the cases where only a few plant stands were broken or partially bent. For instance, only Slope values above 89.9 degrees produced a good separation between damaged and undamaged vegetation. The same was observed for the rest of the methods: for Variance, only large variances in elevation; for the Geomorphology tools, only large depressions; and for CSF, only points in the cloud that were clearly separated from the surrounding points were interpreted as damaged vegetation. This implies that damage at the leaf level, such as that generated by hail, frost or diseases, cannot be detected with this method. For damage that does not affect the plant structure, other methods based on the spectral properties of the plant (i.e., using multispectral or hyperspectral sensors) can be useful [74,75]. However, this is a minor limitation, since small damages are unlikely to cause losses in yield; the most significant damages (structural damages) are detected with our method.
A significant constraint of the workflow is the requirement for quality DSMs as input data. Originally, around 20 farmers signed up as volunteers for this study. However, most DSMs had to be discarded because of their low quality, which resulted in artefacts that were wrongly interpreted as damage.
Since all terrain methods perform very well, the selection of the best terrain analysis was also evaluated taking into account technical factors, such as data volume, processing time, the number of processing steps and the possibility of a statistical selection of thresholds and parameters. The data volume generated was similar for all methods except Variance, for which it was more than 3 times higher. CSF requires more steps for preprocessing and postprocessing, and it was not possible to select the appropriate parameters in an unsupervised manner. Therefore, Slope analysis was selected as the most accurate and efficient method for the detection of crop damage.

5. Conclusions

The present study intends to help farmers, agronomists and agribusiness professionals to improve their field management and to obtain accurate and objective estimations of damage for use in insurance claims. In a world with a changing climate, where a rise in pests, fires, floods, droughts and storms is expected, unsupervised, accurate and fast estimations of crop damage will simplify the task of adjusters and farmers.
The current study explored existing terrain analysis methods to detect crop damage from Unmanned Aerial Vehicle (UAV) Digital Surface Models (DSMs). Datasets from different locations in Europe and America, corresponding to different crop types, crop damages, growth stages and damage intensities, were tested. This method was designed for field crops (i.e., barley, corn, wheat, rice, etc.) and not for row crops. Four existing terrain analysis methods were tested: Slope detection, Variance analysis, Cloth Simulation Filter and Geomorphology classification. The proposed workflow did not require training data or expert knowledge, but a posteriori refinement of the results was needed to remove machinery tracks and other sources of noise. The results of our study revealed that all methods are able to detect crop damage in our tested dataset with an accuracy above 90%, and that Slope and Variance were the methods that presented the highest overall accuracy, around 98%.
The presented workflow proved successful across different crop types, growth stages and damage types. However, it is only effective for field crop data and severe damage intensities.
Future research will focus on the implementation of the workflow in a programming language and on the estimation of crop damage in biomass volume or yield, by intersecting the information retrieved from this study (hectares of damaged crops) with information about the absolute height of the observed crops.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/2072-4292/12/6/981/s1, Figure S1. Results of the six terrain analysis and post-processing workflow on dataset W1W (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 34N, Figure S2. Results of the six terrain analysis and post-processing workflow on dataset C1M (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 20S, Figure S3. Results of the six terrain analysis and post-processing workflow on dataset B1L (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 14N, Figure S4. Results of the six terrain analysis and post-processing workflow on dataset W2W (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 30N, Figure S5. Results of the six terrain analysis and post-processing workflow on dataset C2A (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 33N, Figure S6. Results of the six terrain analysis and post-processing workflow on dataset C3W (shown in yellow). In red, the damages detected from photointerpretation. UTM WGS84 Zone 16N.

Author Contributions

V.E.G.M. contributed to the conceptualization, funding acquisition, data curation, methodology, validation, and writing (original draft and review and editing). C.R. contributed to the conceptualization, funding acquisition, methodology, project administration, resources, software provision, and supervision. G.A.S.-A. contributed to the funding acquisition, project administration, resources, software provision, supervision and writing (review). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Mitacs Accelerate program (www.mitacs.ca/en/programs/accelerate) (Grant No 08301), with the collaboration of the University of Alberta and Skymatics Ltd.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. FAO; IFAD; UNICEF; WFP; WHO. The State of Food Security and Nutrition in the World 2018: Building Climate Resilience for Food Security and Nutrition; FAO: Rome, Italy, 2018. [Google Scholar]
  2. National Foreign Assessment Center (U.S.). The World Factbook; Central Intelligence Agency: Washington, DC, USA, 2017.
  3. Vermeulen, S.J.; Aggarwal, P.K.; Ainslie, A.; Angelone, C.; Campbell, B.M.; Challinor, A.J.; Lau, C. Options for support to agriculture and food security under climate change. Environ. Sci. Policy 2012, 15, 136–144. [Google Scholar] [CrossRef]
  4. Morton, J.F. The impact of climate change on smallholder and subsistence agriculture. Proc. Natl. Acad. Sci. USA 2007, 104, 19680–19685. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Parry, M.L. Climate Change and World Agriculture; Earthscan Publications Ltd.: London, UK, 1990. [Google Scholar]
  6. Steffen, W.; Rockström, J.; Richardson, K.; Lenton, T.M.; Folke, C.; Liverman, D.; Summerhayes, C.P.; Barnosky, A.D.; Cornell, S.E.; Crucifix, M.; et al. Trajectories of the Earth System in the Anthropocene. Proc. Natl. Acad. Sci. USA 2018, 115, 8252–8259. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Allen, M.; Dube, O.P.; Solecki, W.; Hoegh-Guldberg, F.O.; Jacob, D.; Roy, M.J.; Tschakert, P. Global Warming of 1.5 °C; IPPC: Rome, Italy, 2018. [Google Scholar]
  8. FAO; IFAD; UNICEF; WFP; WHO. The State of Food Security and Nutrition in the World 2017: Building Resilience for Peace and Food Security; FAO: Rome, Italy, 2017. [Google Scholar]
  9. Babst, F.; Bouriaud, O.; Poulter, B.; Trouet, V.; Girardin, M.P.; Frank, D.C. Twentieth century redistribution in climatic drivers of global tree growth. Sci. Adv. 2019, 5, eaat4313. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Unkovich, M.; Baldock, J.; Forbes, M. Variability in harvest index of grain crops and potential significance for carbon accounting: Examples from Australian agriculture. Adv. Agron. 2010, 105, 173–219. [Google Scholar]
  11. Klein, T.; Holzkämper, A.; Calanca, P.; Fuhrer, J. Adaptation options under climate change for multifunctional agriculture: A simulation study for western Switzerland. Reg. Environ. Chang. 2014, 14, 167–184. [Google Scholar] [CrossRef] [Green Version]
  12. Das, J.; Cross, G.; Qu, C.; Makineni, A.; Tokekar, P.; Mulgaonkar, Y.; Kumar, V. Devices, systems, and methods for automated monitoring enabling precision agriculture. In Proceedings of the IEEE Int. Conference on Automation Science and Engineering (CASE), Gothenburg, Sweden, 24–25 August 2015. [Google Scholar]
  13. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  14. Primicerio, J.; Di Gennaro, S.F.; Fiorillo, E.; Genesio, L.; Lugato, E.; Matese, A.; Vaccari, F.P. A flexible unmanned aerial vehicle for precision agriculture. Precis. Agric. 2012, 13, 517–523. [Google Scholar] [CrossRef]
  15. McBratney, A.; Whelan, B.; Ancev, T. Future Directions of Precision Agriculture. Precis. Agric. 2005, 6, 7–23. [Google Scholar] [CrossRef]
  16. Yang, C.; Everitt, J.H.; Du, Q.; Luo, B.; Chanussot, J. Using high-resolution airborne and satellite imagery to assess crop growth and yield variability for precision agriculture. IEEE 2013, 101, 582–592. [Google Scholar] [CrossRef]
  17. Houborg, R.; McCabe, M.F. High-resolution NDVI from Planet’s constellation of earth observing nano-satellites: A new data source for precision agriculture. Remote Sens. 2016, 8, 768. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  19. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of spectral–temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications. IEEE JSTAR 2015, 8, 3140–3146. [Google Scholar] [CrossRef]
  20. Elarab, M.; Ticlavilca, A.M.; Torres-Rua, A.F.; Maslova, I.; McKee, M. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture. Int. J. Appl. Earth Obs. 2015, 43, 32–42. [Google Scholar] [CrossRef] [Green Version]
  21. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef] [Green Version]
  22. Park, B.; Lu, R. Hyperspectral Imaging Technology in Food and Agriculture; Springer: New York, NY, USA, 2015. [Google Scholar]
  23. Honkavaara, E.; Kaivosoja, J.; Mäkynen, J.; Pellikka, I.; Pesonen, L.; Saari, H.; Salo, H.; Hakala, T.; Marklelin, L.; Rosnell, T. Hyperspectral reflectance signatures and point clouds for precision agriculture by light weight UAV imaging system. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 7, 353–358. [Google Scholar]
  24. Whiting, M.L.; Ustin, S.L.; Zarco-Tejada, P.; Palacios-Orueta, A.; Vanderbilt, V.C. Hyperspectral mapping of crop and soils for precision agriculture. In Proceedings of the Remote Sensing and Modeling of Ecosystems for Sustainability III, San Diego, CA, USA, 27 September 2006. [Google Scholar]
  25. Khanal, S.; Fulton, J.; Shearer, S. An overview of current and potential applications of thermal remote sensing in precision agriculture. Comput. Electron. Agric. 2017, 139, 22–32. [Google Scholar] [CrossRef]
  26. Baggio, A. Wireless sensor networks in precision agriculture. In Proceedings of the ACM Workshop on Real-World Wireless Sensor Networks (REALWSN 2005), Stockholm, Sweden, 20–21 June 2005. [Google Scholar]
  27. Wang, N.; Zhang, N.; Wang, M. Wireless sensors in agriculture and food industry—Recent development and future perspective. Comput. Electron. Agric. 2006, 50, 1–14. [Google Scholar] [CrossRef]
  28. Camilli, A.; Cugnasca, C.E.; Saraiva, A.M.; Hirakawa, A.R.; Corrêa, P.L. From wireless sensors to field mapping: Anatomy of an application for precision agriculture. Comput. Electron. Agric. 2007, 58, 25–36. [Google Scholar] [CrossRef]
  29. Brisco, B.; Brown, R.J. Drought Stress Evaluation in Agricultural Crops Using C-HH SAR Data. Can. J. Remote Sens. 1990, 16, 39–44. [Google Scholar] [CrossRef]
  30. Senthilkumar, T.; Singh, C.B.; Jayas, D.S.; White, N.D.G. Detection of fungal infection in canola using near-infrared hyperspectral imaging. J. Agric. Eng. 2012, 49, 21–27. [Google Scholar]
  31. Tapia-Silva, F.O.; Itzerott, S.; Foerster, S.; Kuhlmann, B.; Kreibich, H. Estimation of flood losses to agricultural crops using remote sensing. Phys. Chem. Earth Parts 2011, 36, 253–265. [Google Scholar] [CrossRef]
  32. Korontzi, S.; McCarty, J.; Loboda, T.; Kumar, S.; Justice, C. Global distribution of agricultural fires in croplands from 3 years of Moderate Resolution Imaging Spectroradiometer (MODIS) data. Glob. Biogeochem. Cycles 2006, 20. [Google Scholar] [CrossRef]
  33. Sanchez-Azofeifa, A.; Antonio Guzmán, J.; Campos, C.A.; Castro, S.; Garcia-Millan, V.; Nightingale, J.; Rankine, C. Twenty-first century remote sensing technologies are revolutionizing the study of tropical forests. Biotropica 2017, 49, 604–619. [Google Scholar] [CrossRef]
  34. Grenzdörffer, G.J.; Engel, A.; Teichert, B. The photogrammetric potential of low-cost UAVs in forestry and agriculture. Int. Arch. ISPRS 2008, 31, 1207–1214. [Google Scholar]
  35. Gómez-Candón, D.; De Castro, A.I.; López-Granados, F. Assessing the accuracy of mosaics from unmanned aerial vehicle (UAV) imagery for precision agriculture purposes in wheat. Precis. Agric. 2014, 15, 44–56. [Google Scholar] [CrossRef] [Green Version]
  36. Xiang, H.; Tian, L. Development of a low-cost agricultural remote sensing system based on an autonomous unmanned aerial vehicle (UAV). Biosyst. Eng. 2011, 108, 174–190. [Google Scholar] [CrossRef]
  37. Harwin, S.J. Multi-View Stereopsis (MVS) from an Unmanned Aerial Vehicle (UAV) for Natural Landform Mapping. Ph.D. Thesis, University of Tasmania, Hobart, Australia, 2015. [Google Scholar]
  38. Daponte, P.; De Vito, L.; Glielmo, L.; Iannelli, L.; Liuzza, D.; Picariello, F.; Silano, G. A review on the use of drones for precision agriculture. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Ancona, Italy, 1–2 October 2018. [Google Scholar]
  39. Stafford, J.V.; Werner, A. Precision Agriculture; Wageningen Academic Pub.: Wageningen, The Netherlands, 2003. [Google Scholar]
  40. Avtar, R.; Watanabe, T. Unmanned Aerial Vehicle: Applications in Agriculture and Environment; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  41. Li, W.; Fu, H.; Yu, L.; Cracknell, A. Deep learning based oil palm tree detection and counting for high-resolution remote sensing images. Remote Sens. 2017, 9, 22. [Google Scholar] [CrossRef] [Green Version]
  42. Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A.M.; Noardo, F.; Spanò, A. UAV Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing. Int. Arch. ISPRS 2016, 41. [Google Scholar]
  43. Stefanik, K.V.; Gassaway, J.C.; Kochersberger, K.; Abbott, A.L. UAV-based stereo vision for rapid aerial terrain mapping. GIScience Remote Sens. 2011, 48, 24–49. [Google Scholar] [CrossRef]
  44. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  45. Schirrmann, M.; Giebel, A.; Gleiniger, F.; Pflanz, M.; Lentschke, J.; Dammer, K.H. Monitoring agronomic parameters of winter wheat crops with low-cost UAV imagery. Remote Sens. 2016, 8, 706. [Google Scholar] [CrossRef] [Green Version]
  46. Christiansen, M.; Laursen, M.; Jørgensen, R.; Skovsen, S.; Gislum, R. Designing and testing a UAV mapping system for agricultural field surveying. Sensors 2017, 17, 2703. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Lu, N.; Zhou, J.; Han, Z.; Li, D.; Cao, Q.; Yao, X.; Cheng, T. Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system. Plant Methods 2019, 15, 17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Wilke, N.; Siegmann, B.; Klingbeil, L.; Burkart, A.; Kraska, T.; Muller, O.; Rascher, U. Quantifying Lodging Percentage and Lodging Severity Using a UAV-Based Canopy Height Model Combined with an Objective Threshold Approach. Remote Sens. 2019, 11, 515. [Google Scholar] [CrossRef] [Green Version]
  49. Hunt, E.R., Jr.; Daughtry, C.S. What good are unmanned aircraft systems for agricultural remote sensing and precision agriculture? Int. J. Remote Sens. 2018, 39, 5345–5376. [Google Scholar] [CrossRef] [Green Version]
  50. Berry, P.M.; Sterling, M.; Baker, C.J.; Spink, J.; Sparkes, D.L. A calibrated model of wheat lodging compared with field measurements. Agric. For. Meteorol. 2003, 119, 167–180. [Google Scholar] [CrossRef]
  51. Winn, M.; Millin, J.J. System and Methods for Hosting Missions with Unmanned Aerial Vehicles. U.S. Patent Application 14/844,841, 23 March 2017. [Google Scholar]
  52. Jasiewicz, J.; Stepinski, T. Geomorphons—A pattern recognition approach to classification and mapping of landforms. Geomorphology 2013, 182, 147–156. [Google Scholar] [CrossRef]
  53. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. Structure-from-Motion’photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  54. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Science Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  55. Wolf, P.R.; Dewitt, B.A. Elements of Photogrammetry: With Applications in GIS; McGraw-Hill: New York, NY, USA, 2000. [Google Scholar]
  56. McGlone, C.; Mikhail, E.; Bethel, J. Manual of Photogrammetry; ASPRS: Bethesda, MD, USA, 1980. [Google Scholar]
  57. Ullman, S. The interpretation of structure from motion. Biol. Sci. 1979, 203, 405–426. [Google Scholar]
  58. Agisoft, LLC. Agisoft PhotoScan User Manual: Professional Edition. 2018. Available online: https://www.agisoft.com/pdf/photoscan-pro_1_4_en.pdf (accessed on 11 March 2020).
  59. Pix4D SA. Pix4Dmapper User Manual 3.1; Pix4D SA: Lausanne, Switzerland, 2016. [Google Scholar]
  60. Jenson, S.K. Applications of hydrologic information automatically extracted from digital elevation models. Hydrol. Process. 1991, 5, 31–44. [Google Scholar] [CrossRef]
  61. Lane, S.N.; James, T.D.; Crowell, M.D. Application of digital photogrammetry to complex topography for geomorphological research. Photogramm. Rec. 2000, 16, 793–821. [Google Scholar] [CrossRef]
  62. Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. 2008, 32, 31–49. [Google Scholar]
  63. Hardy, R.L. Multiquadric equations of topography and other irregular surfaces. J. Geophys. Res. 1971, 76, 1905–1915. [Google Scholar] [CrossRef]
  64. Burrough, P.A.; McDonell, R.A. Principles of Geographical Information Systems; Oxford University Press: New York, NY, USA, 1998. [Google Scholar]
  65. Anys, H.; Bannari, A.; He, D.C.; Morin, D. Texture analysis for the mapping of urban areas using airborne MEIS-II images. In Proceedings of the First, Int. Airborne Remote Sensing Conference and Exhibition, Strasbourg, France, 12–15 September 1994. [Google Scholar]
  66. Stepinski, T.; Jasiewicz, J. Geomorphons—A new approach to classification of landform. Geomorphology 2011, 182, 147–156. [Google Scholar]
  67. van Rees, E. Exelis Visual Information Solutions. GeoInformatics 2013, 16, 24. [Google Scholar]
  68. GRASS Development Team. Geographic Resources Analysis Support System (GRASS 7) Programmer’s Manual. Open Source Geospatial Foundation Project. 2017. Available online: http://grass.osgeo.org/programming7/ (accessed on 12 March 2019).
  69. Cox, D.R. The regression analysis of binary sequences (with discussion). J. R. Stat. Soc. B 1958, 20, 215–242. [Google Scholar]
  70. ESRI. ArcGIS Desktop: Release 10; Environmental Systems Research Institute: Redlands, CA, USA, 2011. [Google Scholar]
  71. Foody, G.M. Harshness in image classification accuracy assessment. Int. J. Remote Sens. 2008, 29, 3137–3158. [Google Scholar] [CrossRef] [Green Version]
  72. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Applications; Lewis Publishers: Boca Raton, FL, USA, 1999. [Google Scholar]
  73. Walker, S.H.; Duncan, D.B. Estimation of the probability of an event as a function of several independent variables. Biometrika 1967, 54, 167–178. [Google Scholar] [CrossRef]
  74. Zhang, X.; Li, X.; Zhang, B.; Zhou, J.; Tian, G.; Xiong, Y.; Gu, B. Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method. Comput. Electron. Agric. 2018, 154, 165–175. [Google Scholar] [CrossRef]
  75. Young, F.R.; Apan, A.; Chandler, O. Crop hail damage: Insurance loss assessment using remote sensing. Mapp. Resour. Manag. Proc. RSPSoc2004 2004. [Google Scholar]
  76. Peters, A.J.; Griffin, S.C.; Viña, A.; Ji, L. Use of remotely sensed data for assessing crop hail damage. Photogramm. Eng. Remote Sens. 2000, 66, 1349–1355. [Google Scholar]
  77. Michez, A.; Morelle, K.; Lehaire, F.; Widar, J.; Authelet, M.; Vermeulen, C.; Lejeune, P. Use of unmanned aerial system to assess wildlife (Sus scrofa) damage to crops (Zea mays). J. Unmanned Veh. Syst. 2016, 4, 266–275. [Google Scholar] [CrossRef] [Green Version]
  78. Kuželka, K.; Surový, P. Automatic detection and quantification of wild game crop damage using an unmanned aerial vehicle (UAV) equipped with an optical sensor payload: A case study in wheat. Eur. J. Remote Sens. 2018, 51, 241–250. [Google Scholar] [CrossRef]
  79. Stanton, C.; Starek, M.J.; Elliott, N.; Brewer, M.; Maeda, M.M.; Chu, T. Unmanned aircraft system-derived crop height and normalized difference vegetation index metrics for sorghum yield and aphid stress assessment. J. Appl. Remote Sens. 2017, 11, 026035. [Google Scholar] [CrossRef] [Green Version]
  80. Yang, M.D.; Huang, K.S.; Kuo, Y.H.; Tsai, H.; Lin, L.M. Spatial and spectral hybrid image classification for rice logging assessment through UAV imagery. Remote Sens. 2017, 9, 583. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Location of study sites.
Figure 2. Full crop damage detection process, including pre- and postprocessing workflows and the 3D analysis. Slope analysis on study site “W1W” as an example; see Table 1 for a description of the site.
Figure 3. Trigonometric fundamentals of slope calculation. Modified from [68].
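As Figure 3 illustrates, slope is the arctangent of the rate of elevation change between neighbouring DSM cells. The following NumPy sketch shows that calculation in its simplest form; it is a simplified stand-in for the kernel-based slope estimators in ArcMap/GRASS, not the code used in this study, and the array and cell size are illustrative.

```python
import numpy as np

def slope_degrees(dsm, cell_size):
    """Per-pixel slope: arctangent of the magnitude of the elevation gradient.
    Simplified central-difference version of the kernel-based GIS estimators."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size)   # elevation change per metre in y and x
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A 1 m canopy drop across one 0.1 m pixel yields a near-vertical slope at the edge
dsm = np.ones((5, 5))
dsm[:, 3:] = 0.0
print(slope_degrees(dsm, cell_size=0.1).max())   # about 79 degrees at the canopy edge
```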
Figure 4. Calculation of the topography trend using a linear polynomial (factor 6) and detrending of the Digital Surface Model (DSM).
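The detrending step in Figure 4 fits a smooth polynomial surface to the DSM and subtracts it, so that canopy depressions are expressed relative to a level reference rather than to the underlying terrain. A least-squares sketch of the idea is given below; it is illustrative only (the study derived the trend with GIS tooling), the polynomial order is a free parameter here, and coordinates are normalised purely for numerical conditioning.

```python
import numpy as np

def detrend_dsm(dsm, order=2):
    """Fit a polynomial trend surface z = f(x, y) by least squares and subtract it.
    Illustrative sketch; NaN cells are ignored when fitting the trend."""
    rows, cols = np.indices(dsm.shape)
    # Normalise coordinates to [0, 1] so the design matrix stays well conditioned
    x = cols.ravel() / max(dsm.shape[1] - 1, 1)
    y = rows.ravel() / max(dsm.shape[0] - 1, 1)
    z = dsm.ravel()
    valid = np.isfinite(z)
    # All monomials x**i * y**j with i + j <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x[valid] ** i * y[valid] ** j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z[valid], rcond=None)
    trend = sum(c * x ** i * y ** j for c, (i, j) in zip(coef, terms)).reshape(dsm.shape)
    return dsm - trend, trend
```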
Figure 5. Postprocessing workflow. Circles represent data, diamonds represent processes.
Figure 6. Workflow for the validation of the final results of the 3D analysis.
Figure 7. Validation workflow illustrated with images. Slope analysis on W1W as an example.
Figure 8. Photointerpretation and digitisation of severe damage by expert observation. Slope analysis on study site “W1W” as an example; see the description of the site in Table 1.
Figure 9. Calculation of matches by the intersection of photointerpreted damage (reference) and the final results of the terrain analysis (classified). Slope analysis on study site “W1W” as an example; see the description of the site in Table 1.
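The match layer in Figure 9 is the geometric intersection of the photointerpreted damage polygons (reference) with the damage polygons returned by the terrain analysis (classified). A brief GeoPandas sketch of that overlay follows; file names are hypothetical and both layers are assumed to share a projected CRS in metres.

```python
import geopandas as gpd

reference = gpd.read_file("photointerpreted_damage.shp")   # hypothetical file names
classified = gpd.read_file("slope_analysis_damage.shp")

# Matches = areas labelled as damage in both layers
matches = gpd.overlay(reference, classified, how="intersection")

# Area-based agreement relative to the photointerpreted reference
agreement = matches.geometry.area.sum() / reference.geometry.area.sum()
print(f"Matched {agreement:.1%} of the reference damage area")
```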
Figure 10. Final crop damage detection for C1M at the end of the workflow, using four different terrain analysis methods (Slope, Variance, Cloth Simulation Filter (CSF) and Geomorphology classification).
Table 1. Study sites described by crop type, crop damage and growth stage.
Dataset Code * | Crop Species | Damage Type | Growth Stage ** | Area (ha) | Site
B1L | Barley | Lodging | 73. Milking | 79.4 | Canada
W1W | Wheat | Wind | 85. Ripening | 35.4 | Hungary
W2W | Wheat | Wind | 89. Late ripening | 3.9 | UK
C1M | Corn | Man-made | 69. Flowering | 4.1 | Brazil
C2A | Corn | Wildlife | 31. Jointing | 7.1 | Hungary
C3W | Corn | Wind | 49. Booting | 34.7 | Chicago, US
* Code naming: Crop type—sample #—Damage type; Crop type: B: barley, C: corn, W: wheat; Damage type: L: lodging, M: man-made, A: animals, W: wind. ** According to the BBCH growth scale.
Table 2. General description of terrain analysis methods in this study.
Characteristic | Slope Detection | Variance Analysis | Cloth Simulation | Geomorphology Classification
Software | ArcMap | ENVI | CloudCompare | GRASS
File format | Raster | Raster | Point cloud | Raster
Statistics calculation | Kernel-based | Kernel-based | Point cloud | Kernel-based
Topography | Any | Any | Detrended | Any
Output | Edge of damage | Edge of damage | Depression area | Depression area
Output format | Raster | Raster | Points | Raster
Table 3. Parameters defined for each of the 3D analysis methods. All units are “pixels”.
Study Site | Variance: Kernel Size | CSF: Classification Threshold | CSF: Cloth Resolution | Geomorphology: Outer Search Radius | Geomorphology: Flatness Threshold
B1L | 9 × 9 | 0.2 | 3.0 | 60 | 1
W1W | 9 × 9 | 0.2 | 2.0 | 60 | 1
W2W | 9 × 9 | 0.2 | 2.0 | 60 | 1
C1M | 9 × 9 | 0.2 | 4.0 | 60 | 1
C2A | 9 × 9 | 0.2 | 1.0 | 60 | 1
C3W | 9 × 9 | 0.2 | 2.0 | 60 | 1
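For orientation, the geomorphon parameters in Table 3 correspond to options of the r.geomorphon module in GRASS GIS (outer search radius and flatness threshold). A hedged sketch of that call through the grass.script Python API is given below; it assumes an active GRASS session, the raster names are hypothetical, and this is not presented as the authors' exact command.

```python
import grass.script as gs

# Outer search radius and flatness threshold as listed in Table 3; raster names are hypothetical
gs.run_command(
    "r.geomorphon",
    elevation="dsm_detrended",   # detrended DSM already imported into the current mapset
    forms="landform_classes",    # output raster of geomorphon landform classes
    search=60,                   # outer search radius
    flat=1,                      # flatness threshold
    overwrite=True,
)
```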
Table 4. Thresholds to discriminate edges of damage for slope and variance.
Study Site | Slope (slope degrees) | Variance (decimal degrees)
B1L | 89.99 | 0.0005
W1W | 89.994 | 0.1
W2W | 89.99 | 0.01
C1M | 89.9985 | 0.02
C2A | 89.998 | 0.005
C3W | 89.994 | 0.0025
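These thresholds act as simple raster masks: assuming that values above the site-specific threshold mark an edge of damage, the classification reduces to a per-pixel comparison. A minimal NumPy illustration using the W1W thresholds follows; the random arrays are placeholders for the real slope and variance rasters, which would normally be read with a raster library.

```python
import numpy as np

# Site-specific thresholds from the W1W row of Table 4
SLOPE_THRESHOLD = 89.994       # degrees
VARIANCE_THRESHOLD = 0.1

# Illustrative arrays standing in for the slope and variance rasters
rng = np.random.default_rng(0)
slope_raster = rng.uniform(0.0, 90.0, size=(100, 100))
variance_raster = rng.uniform(0.0, 0.2, size=(100, 100))

slope_edges = slope_raster >= SLOPE_THRESHOLD            # True where a damage edge is flagged
variance_edges = variance_raster >= VARIANCE_THRESHOLD
print(int(slope_edges.sum()), int(variance_edges.sum()))
```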
Table 5. Crop height thresholds to discriminate damaged from nondamaged polygons.
Study Site | Slope | Variance | CSF | Geomorphology Classification
B1L | −1.0000 | −2 | −20.0 | −8
W1W | 100–1500 | 925 | 900.0 | 900
W2W | 3 | Any except 0 | None | Any except 0
C1M | 0–4000 | −5 | −40.0 | −125
C2A | 0 | −3 | None | −5
C3W | 15 | Any except 1 | None | 0
Table 6. Accuracy assessment of the tested terrain analysis methods.
Number of samples per site: B1L 189,900; W1W 90,500; W2W 11,900; C1M 7700; C2A 18,600; C3W 52,600.
Terrain Analysis Method | Metric | B1L | W1W | W2W | C1M | C2A | C3W | Average
Slope | Overall accuracy (%) | 98.9 | 96.6 | 97.3 | 97.0 | 95.7 | 95.8 | 96.9
Slope | User's accuracy (%) | 98.9 | 96.6 | 97.3 | 97.0 | 95.7 | 95.7 | 96.9
Slope | Producer's accuracy (%) | 97.6 | 97.7 | 80.9 | 97.6 | 83.1 | 95.7 | 92.1
Variance | Overall accuracy (%) | 98.9 | 98.0 | 88.6 | 96.2 | 92.6 | 95.8 | 95.0
Variance | User's accuracy (%) | 98.9 | 98.0 | 88.5 | 96.2 | 92.6 | 95.8 | 95.0
Variance | Producer's accuracy (%) | 97.2 | 96.5 | 89.2 | 97.2 | 84.9 | 94.1 | 93.2
CSF | Overall accuracy (%) | 98.0 | 97.0 | 95.5 | 96.3 | 94.4 | 92.6 | 95.6
CSF | User's accuracy (%) | 98.0 | 96.9 | 95.5 | 96.3 | 94.4 | 92.6 | 95.6
CSF | Producer's accuracy (%) | 99.1 | 94.4 | 90.0 | 99.1 | 94.9 | 98.1 | 95.9
Geomorphology classification | Overall accuracy (%) | 97.5 | 95.2 | 87.0 | 96.7 | 90.2 | 88.5 | 92.5
Geomorphology classification | User's accuracy (%) | 97.5 | 95.2 | 87.0 | 96.8 | 90.2 | 88.5 | 92.5
Geomorphology classification | Producer's accuracy (%) | 95.8 | 96.1 | 77.8 | 99.2 | 86.0 | 88.2 | 90.5
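The overall, user's and producer's accuracies in Table 6 follow the standard confusion-matrix definitions [72]. The sketch below shows the calculation for the binary damage/no-damage case; the counts are illustrative and are not the study's validation samples.

```python
import numpy as np

# Illustrative confusion matrix: rows = classified (damage, no damage),
# columns = reference (damage, no damage)
confusion = np.array([[9500,  300],
                      [ 250, 9950]])

overall_accuracy = np.trace(confusion) / confusion.sum()
users_accuracy = confusion[0, 0] / confusion[0, :].sum()      # correctness of the mapped "damage" class
producers_accuracy = confusion[0, 0] / confusion[:, 0].sum()  # completeness of the reference damage detected
print(f"OA {overall_accuracy:.1%}, UA {users_accuracy:.1%}, PA {producers_accuracy:.1%}")
```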
Table 7. Hectares and volume of damage and relative errors.
Terrain Analysis Method | Metric | B1L | W1W | W2W | C1M | C2A | C3W
Reference | Area (m²) | 26,060.7 | 25,998.4 | 8037 | 2711.2 | 20,500.7 | 35,192.7
Slope | Area (m²) | 25,774.4 | 25,117.9 | 7818.9 | 2629.2 | 19,616.7 | 33,698.7
Slope | Error (%) | 1.1 | 3.4 | 2.7 | 3.0 | 4.3 | 4.2
Variance | Area (m²) | 22,770.9 | 25,483.3 | 7117.8 | 2608.7 | 18,990.7 | 33,703.4
Variance | Error (%) | 1.1 | 2.0 | 11.4 | 3.8 | 7.4 | 4.2
CSF | Area (m²) | 25,541.3 | 25,205.8 | 7674.3 | 2610.9 | 19,355.5 | 32,580.0
CSF | Error (%) | 2.0 | 3.0 | 4.5 | 3.7 | 5.6 | 7.4
Geomorphology classification | Area (m²) | 25,144.5 | 24,753.9 | 6995.6 | 2622.8 | 18,501.3 | 31,148.2
Geomorphology classification | Error (%) | 2.5 | 4.8 | 13.0 | 3.3 | 9.8 | 11.5
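The relative errors in Table 7 appear to be the absolute difference between detected and reference damage area, expressed as a percentage of the reference area; for example, the slope result for B1L reproduces the tabulated 1.1%.

```python
# B1L figures from the slope rows of Table 7
reference_area = 26_060.7   # m2, photointerpreted damage
detected_area = 25_774.4    # m2, slope analysis

relative_error = abs(reference_area - detected_area) / reference_area * 100
print(f"{relative_error:.1f} % relative error")   # prints 1.1 %, matching Table 7
```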
Table 8. Processing steps, data volume (value on the left, in Mb) and processing time (value on the right, in sec.) for each terrain analysis method. Values are averages for all six study sites. (NA: Not Applied; NAcc: Not Accounted).
Processing Steps | Slope | Variance | CSF | Geomorphon
DSM preprocessing
DSM converted to points | 5.2 / 1.5 | 5.2 / 1.5 | 5.2 / 1.5 | 5.2 / 1.5
Trend calculation | 0.6 / 3.9 | 0.6 / 3.9 | 0.6 / 3.9 | 0.6 / 3.9
DSM detrending | 0.6 / 0.6 | 0.6 / 0.6 | 0.6 / 0.6 | 0.6 / 0.6
DSM projection into UTM | NA | NA | 0.6 / 6.6 | NA
Conversion of re-projected UTM raster into points | NA | NA | 5.2 / 1.2 | NA
Analysis
Terrain analysis preliminary results | 0.2 / 0.3 | 78.4 / 1.0 | 0.2 / 0.2 | 0.06 / 15.0
Postprocessing
Conversion of preliminary results into raster | NA | NA | 0.3 / 0.3 | NA
Generation of a mask for damaged pixels | 0.005 / 0.06 | 20.2 / 1.3 | NA | 0.002 / 0.07
Conversion of damage mask into polygons | 1.0 / 0.1 | 5.6 / 0.9 | 0.2 / 0.1 | 1.5 / 0.2
Conversion of polygons containing mean elevation into raster | 0.8 / 0.3 | 0.8 / 0.6 | 1.0 / 0.3 | 0.8 / 0.4
Conversion of CSM raster into integer values | 0.004 / 0.04 | 0.06 / 0.04 | 0.003 / 0.06 | 0.05 / 0.4
CSM raster converted into polygons | 0.9 / 0.1 | 1.8 / 0.2 | 0.2 / 0.07 | 0.9 / 0.1
Selection of polygons with lowest elevation | 0.3 / 2.3 | 0.03 / 2.3 | NA | 0.07 / 2.3
Aggregation of polygons of same class | 0.03 / 0.3 | 0.03 / 0.6 | 0.003 / 2.1 | 0.003 / 0.3
Validation
Photointerpretation of ground truth damage polygon layer | 0.001 / NAcc | 0.001 / NAcc | 0.001 / NAcc | 0.001 / NAcc
Generation of "No damage" layer | 0.04 / 0.08 | 0.04 / 0.08 | 0.04 / 0.08 | 0.04 / 0.08
Generation of "Matches" layer | 0.2 / 0.2 | 0.003 / 0.6 | 0.004 / 0.1 | 0.4 / 0.3
Generation of random points from CSM | 0.3 / 0.7 | 0.3 / 0.7 | 0.3 / 0.7 | 0.3 / 0.7
Extraction of damage points from random points layer | 0.009 / 0.3 | 0.009 / 0.3 | 0.009 / 0.3 | 0.009 / 0.3
Extraction of no-damage points from random points layer | 0.5 / 0.8 | 0.5 / 0.8 | 0.5 / 0.8 | 0.5 / 0.8
Extraction of classified points from random points layer | 0.009 / 0.3 | 0.004 / 1.3 | 0.006 / 0.4 | 0.004 / 1.1
Extraction of matches points from random points layer | 0.006 / 0.3 | 0.05 / 0.3 | 0.006 / 0.3 | 0.006 / 0.3
TOTAL data volume (Mb) | 10.6 | 114.2 | 15.0 | 11.0
TOTAL computation time (sec) | 11.9 | 16.7 | 21.6 | 27.6
