Article

Delineation of Crop Field Areas and Boundaries from UAS Imagery Using PBIA and GEOBIA with Random Forest Classification

1 Faculty of Forestry and Environmental Management, University of New Brunswick, 2 Bailey Dr, Fredericton, NB E3B5A3, Canada
2 Department of Geography, The University of Western Ontario, 1151 Richmond Street, London, ON N6A 5C2, Canada
3 A&L Canada Laboratories, 2136 Jetstream Rd., London, ON N5V 3P5, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(16), 2640; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12162640
Submission received: 9 June 2020 / Revised: 2 August 2020 / Accepted: 13 August 2020 / Published: 16 August 2020
(This article belongs to the Special Issue Environmental Mapping Using Remote Sensing)

Abstract

Unmanned aircraft systems (UAS) have proven to be cost- and time-effective remote-sensing platforms for precision agriculture applications. This study presents a method for the automatic delineation of field areas and boundaries from UAS multispectral orthomosaics acquired over seven vegetated fields with a variety of crops in Prince Edward Island (PEI). This information is needed by crop insurance agencies and growers for an accurate determination of crop insurance premiums. The field areas and boundaries were delineated by applying both a pixel-based and an object-based supervised random forest (RF) classifier to reflectance and vegetation index images, followed by a vectorization pipeline. Both methodologies performed exceptionally well, resulting in a mean area goodness of fit (AGoF) greater than 98% and a mean boundary mean positional error (BMPE) lower than 0.8 m for the seven surveyed fields.

Graphical Abstract

1. Introduction

Today, under climate change, climatic hazards have a higher probability of occurrence and crop insurance has become more critical to growers. To set proper insurance premiums, both the growers and the crop insurance agencies need precise measurements of the cropped field area and boundaries. Field areas and boundaries can be determined by Global Navigation Satellite System (GNSS)-based in situ surveys, but these are costly in time and effort. Space-borne optical images have also been tested [1,2,3], but they lack spatial and temporal resolution. Moreover, such imagery is prone to atmospheric interference and can be costly when acquired by commercial satellites. An alternative is to use images acquired by unmanned aircraft systems (UASs), which have the advantages of being portable, flexible, and cost- and time-effective [4]. UAS imagery constitutes big data, but efficient machine-learning (ML) algorithms and today's computational capacities have made processing such data much easier than in the recent past [5].
There are only a few studies on the use of UAS imagery for field area and boundary extraction, mainly for cadastral applications. Red, green and blue (RGB) UAS imagery was used to extract cadastral boundary features using a deep learning algorithm [6] or using the ENVI (Exelis Visual Information Solutions, Boulder, CO, USA) segmentation and object generation software [7]. In both cases, the fields were either bare soil or vegetated. Bare soil field boundaries were delineated from blue and red UAS imagery with mean-shift clustering and random forests (RF) [8]. Cork oak stand limits were mapped by applying the support vector machine (SVM) and RF to RGB and near-infrared (NIR) imagery [9]. Object-based image analysis with fuzzy clustering was applied to UAS RGB imagery for mapping soil, shrubs, and grass [10]. In the last two cases, the imagery was segmented and analyzed with eCognition (Trimble Inc., USA). Thresholding algorithms were applied to UAS RGB and NIR imagery for crop and weed mapping [11,12] as well as to RGB and NIR imagery for generating skip maps in sugarcane fields [13]. In most of these studies, the method was either too complex or was applied only to bare soil fields.
Pixel-based image analysis (PBIA) is a classic image analysis procedure operating on individual pixels, whereas geographic object-based image analysis (GEOBIA) is a mature ensemble of algorithms and techniques that constructs vector objects by aggregating multiple neighboring pixels, making it highly suitable for very- and ultra-high spatial resolution data such as the UAS imagery used in this study [14,15,16]. GEOBIA generates objects from groups of pixels based on local-minima clustering, aiming to produce segments that carry meaningful context for human perception. Objects, or super-pixels, make more sense at UAS spatial resolutions because real-life objects, which are either geometrically or spectrally distinct, can be described with enough pixels and thus enough spectral information. In general, image segmentation algorithms construct objects with high relative pixel homogeneity and semantic significance. GEOBIA has been extensively employed in land-use and land-cover applications, as reviewed in [17], often outperforming PBIA [15] while keeping the results free of salt-and-pepper noise. For very high-resolution imagery, like the sub-decimeter UAS images, PBIA has multiple disadvantages [18]. PBIA cannot incorporate semantics and does not capture the heterogeneity that exists between different land-cover themes. For classifying UAS image data, PBIA usually requires a very large number of training pixels to cover the spatial and spectral range of each class. Additionally, PBIA results usually suffer from a salt-and-pepper effect related to misclassifications.
GEOBIA-based algorithms have therefore been successfully utilized on UAS imagery for weed mapping with RF [19] and k-means [20]. GEOBIA has also been combined with convolutional neural networks for wetland monitoring [21] and with decision tree algorithms for land-cover classification of arid rangelands [10].
The purpose of this study is to test simple ML and information extraction techniques to automatically delineate field areas and boundaries over a variety of crops from UAS multispectral imagery acquired over corn, barley, and oat vegetated fields located in Prince Edward Island (PEI). Two ML pipelines were tested to classify the land features and vectorize each output as crop field boundaries and areas. The first pipeline is a PBIA that uses RF as the classifier. The second is a GEOBIA that also uses RF as the classifier, applied to the objects generated by segmenting the input UAS imagery. The random forest classifier is a highly robust ensemble classifier with low tuning complexity that is scalable and fast [22].

2. Materials and Methods

2.1. Study Site

The study sites are located in Prince Edward Island, Canada (Figure 1). We surveyed seven fields that have barley, corn, or oat as a crop.

2.2. Unmanned Aircraft System (UAS) Data Collection

The UAS image data were acquired during summer 2018 through multiple surveying campaigns under clear sky conditions (Table 1). The unmanned aerial vehicle (UAV) was a DJI Matrice 100 and the multispectral payload was the MicaSense RedEdge 3 camera (MicaSense Inc., USA).
The RedEdge camera has five sensors capturing the radiation reflected in the blue, green, red, red-edge, and NIR bands of the electromagnetic spectrum (Table 2). The spatial resolution or ground sample distance (GSD in cm/pixel) is common to all five sensors and depends on the UAV flight altitude, the sensor's field of view (horizontal: 47.9°, vertical: 36.9°, diagonal: 58.1°), and the focal length (5.4 mm). With a flying altitude of 100 m, the corresponding GSD is about 8 cm/pixel.
The camera performs radiometric calibration of the acquired imagery using data from the downwelling light sensor mounted on top of the UAV, while the sensors look at nadir with respect to the UAV flight path. The MicaSense RedEdge system also incorporates a GNSS receiver for accurate geolocation of the captured imagery. To allow the creation of reflectance orthomosaics, a MicaSense RedEdge reflectance calibration panel is employed before every survey to measure the incoming radiation. The UAV flights were conducted at an altitude of 100 m, at noontime to minimize shadowing effects and under clear sky conditions to avoid cloud shadowing. The flight paths were planned with a minimum of 75% forward overlap and side overlap between neighboring images.

2.3. Data Processing

For this study, visualization, map generation, and general Geographic Information System (GIS) procedures were handled with the QGIS software [23]. Two image processing methods were tested and assessed for their value in delineating field areas and boundaries, using two different levels of information: first, pixel-level information (PBIA methodology) and, second, object-level information (GEOBIA methodology). The steps and procedures used for delineating field areas and boundaries are described in Figure 2 for the PBIA methodology and in Figure 3 for the GEOBIA methodology [15,24].
In both the PBIA and GEOBIA methodologies, the first step was to generate georeferenced reflectance orthomosaics from the blue, green, red, red-edge, and NIR images of each flight campaign using the commercial photogrammetric software Pix4D Mapper (Pix4D SA, Switzerland). This process involves downwelling light sensor (DLS) and reflectance panel corrections. All surveyed fields were treated individually, and the corresponding rasters were clipped and extracted from each orthomosaic with a surrounding buffer area using the Geospatial Data Abstraction Library (GDAL) [25].
The blue and red reflectance orthomosaics were then used to compute a simple ratio vegetation index (VI) between the two reflectance bands, the blue-red simple ratio (BRSR) = blue/red [8]. This index enhances the spectral difference between soil and vegetation and was already found useful in our previous study on bare soil field area delineation [8].
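As an illustration, the BRSR computation over a clipped field raster can be sketched with the GDAL Python bindings as follows; the file names are hypothetical, and the clipping and buffering of the orthomosaic are assumed to have been done beforehand.

```python
import numpy as np
from osgeo import gdal

# Hypothetical clipped reflectance rasters for one field
blue_ds = gdal.Open("barley1_blue.tif")
red_ds = gdal.Open("barley1_red.tif")
blue = blue_ds.GetRasterBand(1).ReadAsArray().astype(np.float32)
red = red_ds.GetRasterBand(1).ReadAsArray().astype(np.float32)

# BRSR = blue / red, guarding against division by zero outside the field buffer
brsr = np.divide(blue, red, out=np.zeros_like(blue), where=red > 0)

# Write the index as a georeferenced GeoTIFF aligned with the input bands
driver = gdal.GetDriverByName("GTiff")
out = driver.Create("barley1_brsr.tif", blue_ds.RasterXSize, blue_ds.RasterYSize, 1, gdal.GDT_Float32)
out.SetGeoTransform(blue_ds.GetGeoTransform())
out.SetProjection(blue_ds.GetProjection())
out.GetRasterBand(1).WriteArray(brsr)
out.FlushCache()
```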

2.3.1. Random Forests

Both methodologies utilize RF for classification. The classification scheme comprises three classes: soil, crop, and other vegetation. Regarding the input features, the Gray Level Co-occurrence Matrix (GLCM) textural features of Haralick et al. [26] were initially tested but not retained, because the classification accuracies were already high enough without textural features.
Since the RF classifier was introduced by Breiman [27], it has been widely applied in remote sensing [28] due to its robustness and advantages, being easily parametrized and fast. RF is a non-parametric decision-tree classification algorithm that does not assume a normal distribution of the data [27,29]. RF is an ensemble classification model that aggregates a user-defined number of uncorrelated classification trees, each grown with a set of randomly chosen features from the feature space at each tree node. The final class decision is made through majority voting over the full ensemble of trained trees. In this study, we deployed the off-the-shelf RF implementation of the randomForest package [30] in the R programming language v3.5.1. The number of trees grown (ntree) was set to 500, and the number of random features (mtry) for the growth of each decision tree in the ensemble was kept at its default value, which is the square root of the total number of features, rounded down.
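The study used the randomForest package in R; as a hedged illustration of the same configuration (ntree = 500, default mtry, OOB accuracy), a minimal scikit-learn sketch with placeholder data, not the authors' code, could look as follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: 6 features (5 reflectance bands + BRSR), 3 classes
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = rng.integers(0, 3, size=1000)

rf = RandomForestClassifier(
    n_estimators=500,     # ntree = 500, as in the study
    max_features="sqrt",  # default mtry: square root of the number of features
    oob_score=True,       # estimate accuracy on the out-of-bag samples
    n_jobs=-1,
    random_state=42,
)
rf.fit(X, y)
print(f"OOB error rate: {100 * (1 - rf.oob_score_):.2f}%")
```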
To estimate the classification accuracy, RF internally performs a procedure resulting in an out-of-bag (OOB) error rate. RF works by bootstrapping a sample dataset from the original training data for every tree grown. Because the sampling is done with replacement, approximately 37% of the original data are left "out-of-bag" for each tree. These OOB data are then classified by the trees that were not trained with them, and the classification error is estimated by aggregating all the individual OOB errors. The OOB error rate is the complement of the overall classification accuracy of the RF classification and is a highly robust accuracy indicator. RF also provides a confusion matrix indicating the misclassifications for each class. This matrix allows us to compute the class user's and producer's accuracies as well as the overall classification accuracy [31].
Additionally, RF provides importance values that indicate the significance of the input features for the classification, with two metrics in the R implementation: (1) the mean decrease in the Gini index of node impurity when a feature is used to split a node, and (2) the mean decrease in prediction accuracy when a feature is permuted. In this study, we display the MeanDecreaseAccuracy plot using R's ggplot2 [32]. It graphically represents, for each feature, the value of the MeanDecreaseAccuracy metric, which is the difference between the prediction accuracy on the original OOB data and the prediction accuracy when the values of that feature are randomly permuted within the OOB data and passed down the trained forest. The final mean difference in prediction errors for every feature is a measure of feature importance, as it reflects the decrease in accuracy when the feature is assigned random but realistic values. For unimportant features, the permutation should have minimal to no effect on the accuracy, whereas, for important features, the accuracy should be significantly reduced.
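R's MeanDecreaseAccuracy is computed by permuting each feature within the OOB samples; a comparable, though not identical, estimate can be sketched in Python with scikit-learn's permutation_importance on a held-out subset, reusing the rf, X, and y placeholders from the previous sketch. The feature names below are assumptions for illustration only.

```python
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["blue", "green", "red", "red_edge", "nir", "brsr"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
rf.fit(X_train, y_train)

# Permute each feature in the held-out set and record the mean drop in accuracy
result = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=42)
for name, drop in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {drop:.4f}")
```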
For training of the RF classifiers for both the PBIA and GEOBIA methodologies, spatially representative and uniformly spread training sites for the three classes were delineated on the original mosaics. These training sites are common for both methodologies.

2.3.2. Jeffries–Matusita Distance

The spectral separability of the three classes was assessed by the distance between the probability distributions of pairs of classes within the feature space. The metric used is the Jeffries–Matusita (J–M) distance [33,34] (Equation (1)), which builds on the Bhattacharyya (B) distance [35] (Equation (2)). J–M values range from 0 to 2: a value of 0 implies that the two distributions are entirely correlated and thus the classes are spectrally inseparable, while the upper asymptotic limit of 2 means full non-correlation between the classes and is considered an indication of excellent class separability. This bounded range makes the J–M transformation more convenient than B, which falls in the [0, +∞) range.
For each pair of classes $C_1$ and $C_2$, which are two multivariate distributions assumed to be normal, with means $\mu_1$, $\mu_2$ and covariance matrices $\sigma_1$, $\sigma_2$:

$$JM_{C_1,C_2} = 2\left(1 - e^{-B_{C_1,C_2}}\right) \tag{1}$$

where

$$B_{C_1,C_2} = \frac{1}{8} M_{C_1,C_2}^2 + \frac{1}{2}\left\{\log\left[\det(\sigma)\right] - \frac{\log\left[\det(\sigma_1)\right]}{2} - \frac{\log\left[\det(\sigma_2)\right]}{2}\right\} \tag{2}$$

$$\sigma = \frac{\sigma_1 + \sigma_2}{2} \tag{3}$$

and $M_{C_1,C_2}$ is the root Mahalanobis distance [36] between the class means with respect to $\sigma$, computed by Equation (4):

$$M_{C_1,C_2} = \sqrt{(\mu_1 - \mu_2)^t\, \sigma^{-1}\, (\mu_1 - \mu_2)} \tag{4}$$
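Equations (1) to (4) can be implemented directly from the class samples; the sketch below, using synthetic two-class data for illustration, is one possible NumPy implementation (natural logarithms are used for the log terms).

```python
import numpy as np

def jeffries_matusita(x1, x2):
    """J-M distance between two classes; x1 and x2 hold one sample per row and
    one feature per column (Equations (1)-(4))."""
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    s1, s2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    s = (s1 + s2) / 2.0                                   # Equation (3)
    diff = mu1 - mu2
    m_sq = diff @ np.linalg.inv(s) @ diff                 # squared Mahalanobis distance
    _, logdet_s = np.linalg.slogdet(s)
    _, logdet_s1 = np.linalg.slogdet(s1)
    _, logdet_s2 = np.linalg.slogdet(s2)
    b = m_sq / 8.0 + 0.5 * (logdet_s - logdet_s1 / 2.0 - logdet_s2 / 2.0)  # Equation (2)
    return 2.0 * (1.0 - np.exp(-b))                       # Equation (1)

# Synthetic example: two well-separated classes in a 6-feature space
rng = np.random.default_rng(1)
crop = rng.normal(0.45, 0.05, size=(500, 6))
soil = rng.normal(0.20, 0.05, size=(500, 6))
print(jeffries_matusita(crop, soil))  # close to the asymptotic limit of 2
```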

2.3.3. Pixel-Based Image Analysis (PBIA)

For the PBIA methodology, the RF classification uses the five reflectance bands (blue, green, red, red-edge, NIR) and the BRSR VI as input features. As a result, for 6 features, mtry = 2.
All the training areas were randomly sampled, with the number of pixels drawn from each training site proportional to its area. For each class, approximately 10,000 random pixels in total were used for training, resulting in a robust training set with minimized spatially induced bias; a minimal sketch of this sampling is given below. The number of training areas and pixels for each field is shown in Table 3.
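The proportional random sampling of training pixels can be sketched as follows; this is an illustration of the idea under the assumption that each training area is available as a boolean mask aligned with the orthomosaic, not the authors' code.

```python
import numpy as np

def sample_training_pixels(class_masks, n_total=10000, seed=0):
    """Sample about n_total pixel locations for one class, allocating samples to
    each training area proportionally to its area (number of pixels)."""
    rng = np.random.default_rng(seed)
    areas = np.array([mask.sum() for mask in class_masks], dtype=float)
    allocation = np.round(n_total * areas / areas.sum()).astype(int)
    samples = []
    for mask, n in zip(class_masks, allocation):
        rows, cols = np.nonzero(mask)
        chosen = rng.choice(len(rows), size=min(n, len(rows)), replace=False)
        samples.append(np.column_stack([rows[chosen], cols[chosen]]))
    return np.vstack(samples)  # (row, col) indices of the training pixels
```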

2.3.4. Geographic Object-Based Image Analysis (GEOBIA)

Pre-Segmentation Processing

To generate objects optimally in the GEOBIA methodology, we performed the following pre-segmentation processing steps on each reflectance band and on the BRSR VI used in the multi-resolution algorithm, in order to obtain a more consistent segmentation:
  • Set the pixels with values below the lowest 2% and above the highest 98% limits of the image histogram to the 2% and 98% limit pixel values, respectively, in order to minimize the number of outliers, following [37].
  • Normalize all pixel values in order to avoid features with larger value ranges dominating those with smaller ranges. A linear transformation similar to the linear minimum-maximum normalization described in [38,39] is used to rescale the pixel values to a new range between 0 and 100 using Equation (5) (a minimal sketch of this preprocessing step follows the list):
    $$New\_pixel\_value = \left(\frac{Old\_pixel\_value - a}{b - a}\right) \times 100 \tag{5}$$
    with a = the 2% limit pixel value and b = the 98% limit pixel value.
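A minimal sketch of this pre-segmentation step, assuming the band is available as a 2-D NumPy array, is:

```python
import numpy as np

def clip_and_rescale(band, lower_pct=2, upper_pct=98):
    """Clip a reflectance or VI band to its 2%/98% histogram limits and rescale
    it to the 0-100 range following Equation (5)."""
    a, b = np.percentile(band, [lower_pct, upper_pct])
    clipped = np.clip(band, a, b)          # outliers are set to the limit values
    return (clipped - a) / (b - a) * 100.0
```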

Segmentation

The fundamental step of a GEOBIA pipeline is the segmentation of the image. For very high-resolution UAS imagery, which usually has high spectral variability, constructing and parametrizing the segmentation algorithm to generate meaningful objects can be challenging. We performed the image segmentation with eCognition 9.4 (Trimble Inc., USA) using the multiresolution segmentation algorithm [40], one of the most utilized algorithms in eCognition. It is an iterative clustering process that minimizes heterogeneity through a local optimization procedure [41]. It begins at the individual pixel level and merges regions bottom-up until convergence to a threshold that represents the object variance limit, which is parametrized by the "Scale" parameter, in turn weighted by the "Shape" and "Compactness" parameters ranging from 0.1 to 0.9. The larger the Scale value, the higher the allowed variability within each segment, resulting in the generation of larger objects that tolerate more deviation within the homogeneity rules. The Shape parameter determines the weight given to shape relative to spectral information: the higher the Shape value, the less influence the spectral information has on the segmentation. The Compactness parameter determines how well defined the objects will be with respect to the shape criterion, in order to create clear edges. We applied a Scale value of 30, selected by trial and error as is usually done in GEOBIA, to maintain small objects at the field borders, to discriminate well between different vegetation types, and to account for the natural vegetation transition at the borders. The Shape parameter was set to 0.1 to put nearly full weight on the spectral information (color = 1 − shape) for good discrimination between the different vegetation signatures. Finally, for the Compactness parameter, we used a value of 0.4, close to the middle of its range; this parameter does not have any significant impact on the objects because of the very low Shape value. All the parameter values were kept constant for every field, and all input features were weighted equally (weight of 1), as we consider them equally important. Table 4 shows the number of objects generated for each field and their mean area (m2).

Object Feature Generation

The selected object features relate to the spectral properties of the objects, not to their textural or spatial (size-shape) properties, because spectral differences are the most meaningful for classifying crops into land-cover classes. The vector file containing all the objects generated by the segmentation was exported as a georeferenced TIFF file to be used in the following step, which consists of generating object features from the original datasets. The following features, which relate to the objects' spectral properties and are commonly used for spectral discrimination [42,43,44], were generated for each object from the BRSR VI and the blue, green, red, red-edge and NIR reflectance bands (Table 5); a minimal sketch of this step is given after the list:
  • The mean reflectance or VI;
  • The standard deviation (SD) of the reflectance or VI;
  • The median reflectance or VI;
  • The mean reflectance or VI calculated from the range of the 10th to the 90th percentiles of the pixel values distribution, removing outliers for more robust statistics;
  • The SD of the reflectance or VI calculated from the range of the 10th to the 90th percentiles of the pixel values distribution.
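In the study these statistics were computed in eCognition; the sketch below illustrates the same per-object statistics with NumPy, assuming a label raster of object IDs exported from the segmentation and aligned with each band.

```python
import numpy as np

def object_spectral_features(band, labels):
    """For every object ID in 'labels', compute the five statistics of Table 5:
    mean, SD, median, and the 10th-90th percentile-trimmed mean and SD."""
    features = {}
    for obj_id in np.unique(labels):
        values = band[labels == obj_id]
        p10, p90 = np.percentile(values, [10, 90])
        trimmed = values[(values >= p10) & (values <= p90)]
        features[obj_id] = {
            "mean": values.mean(),
            "sd": values.std(),
            "median": np.median(values),
            "perc_mean": trimmed.mean(),
            "perc_sd": trimmed.std(),
        }
    return features
```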

Classification

For the GEOBIA RF classification, we used a total of 30 features, resulting in mtry = 5, and the forest was trained with all the objects that fall within the training polygons. The number of training objects per class and their mean area (m2) are shown in Table 6. On average, the training objects represent 8.9% of the study site areas.

2.3.5. Vectorization

The final map with the field borders and areas is produced by inserting each classified image into a vectorization pipeline. For the PBIA classified image, the System for Automated Geoscientific Analyses (SAGA) majority filter with a radius of 5 pixels [45] was first applied to remove misclassifications and salt-and-pepper noise. For the GEOBIA classified image, this filtering step is omitted because the objects already have the desired homogeneity and consistency. Afterward, the polygonize function of the GDAL library was employed to vectorize the PBIA majority-filtered image and the GEOBIA classified image. Finally, the field area is defined by extracting the largest polygon, after filling its holes and smoothing the borders with vector buffering and debuffering.
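The filtering and polygonization were done with SAGA and GDAL; the final polygon selection and smoothing can be sketched with Shapely as follows (an illustration of the last step only, with a hypothetical input of polygonized "crop" features and a hypothetical smoothing distance).

```python
from shapely.geometry import Polygon, shape

def extract_field_polygon(crop_features, smooth_dist=1.0):
    """From the polygonized crop-class features (e.g., GeoJSON-like dicts read from
    the GDAL polygonize output), keep the largest polygon, remove its holes, and
    smooth the border by buffering and debuffering."""
    polygons = [shape(f["geometry"]) for f in crop_features]
    largest = max(polygons, key=lambda p: p.area)
    filled = Polygon(largest.exterior)                       # drop interior holes
    return filled.buffer(smooth_dist).buffer(-smooth_dist)   # buffer/debuffer smoothing
```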

2.4. Accuracy Assessment

2.4.1. Actual Field Boundaries and Areas

The accuracies provided by RF only compare the classified image to the training areas; a better accuracy assessment should compare the delineated field boundaries and areas to the actual ones. Actual field boundaries and areas can be measured in the field using GNSS equipment that records in situ border waypoints at regular intervals. Such equipment has a measurement accuracy that depends considerably on how many GNSS networks the equipment can receive and how many satellites are visible for every measurement. The accuracy is also related to factors such as the surrounding environment (trees) and the weather conditions. In addition, the method is quite expensive and time-consuming. An alternative is to manually delineate the field boundaries and areas on the RGB raster composite made with the UAS imagery, as was done in our previous work [8]. Indeed, the RGB composite provides enough visual detail because it has a pixel size of ~8 cm, being made from the photogrammetrically stitched UAS images, which have excellent geolocation accuracy.

2.4.2. Area Goodness of Fit (AGoF)

The first accuracy metric of the method compares the manually- and machine-delineated areas. It is an area similarity measure called the area goodness of fit (AGoF) [46]. AGoF quantifies the overlap between the manually- and machine-delineated areas for a given field through Equation (6) (a minimal sketch follows the definitions below):
$$AGoF = \left(\frac{C}{AC + C}\right)\left(\frac{C}{BC + C}\right) \tag{6}$$

where:
A is the manually delineated field area (ha);
B is the machine-delineated field area (ha);
C is the area of the intersection between the manually- and the machine-delineated crop polygons (ha);
AC is the absolute value of the total area difference between A and C (ha): $AC = |A - C|$;
BC is the absolute value of the total area difference between B and C (ha): $BC = |B - C|$.
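Given the two field polygons, Equation (6) is straightforward to compute; a minimal Shapely sketch, assuming both polygons are in the same projected coordinate system, is:

```python
from shapely.geometry import Polygon

def agof(manual: Polygon, machine: Polygon) -> float:
    """Area goodness of fit (Equation (6)) between the manually delineated polygon (A)
    and the machine-delineated polygon (B); returns a fraction in [0, 1]."""
    a = manual.area
    b = machine.area
    c = manual.intersection(machine).area   # overlapping area C
    ac = abs(a - c)
    bc = abs(b - c)
    return (c / (ac + c)) * (c / (bc + c))
```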

2.4.3. Boundary Mean Positional Error (BMPE)

The second accuracy metric of the method compares, for a given field, the manually- and the machine-delineated boundaries. It is a positional similarity measure called the boundary mean positional error (BMPE) [8]. To compute the BMPE, sequential geographical points are first sampled at a 0.5 m interval along the machine-delineated boundary. Secondly, the minimum distance between each of these points and the manually delineated boundary is calculated. The BMPE is finally the mean distance between the N machine-delineated boundary points and the manually delineated boundary polygon (Equation (7)); a minimal sketch follows the definitions below:
$$BMPE = \frac{1}{N}\sum_{i=1}^{N} MinDist_i \tag{7}$$

where:
  • N = number of sample points from the machine-delineated boundary;
  • $MinDist_i$ = minimum distance between the i-th point of the machine-delineated boundary and the manually delineated boundary (m).
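A minimal Shapely sketch of Equation (7), assuming metric (projected) coordinates and the 0.5 m sampling interval used in the study, is:

```python
import numpy as np
from shapely.geometry import Polygon

def bmpe(manual: Polygon, machine: Polygon, spacing: float = 0.5) -> float:
    """Boundary mean positional error (Equation (7)): sample points every 'spacing'
    metres along the machine-delineated boundary and average their minimum
    distances to the manually delineated boundary."""
    machine_ring = machine.exterior
    manual_ring = manual.exterior
    n_points = max(int(machine_ring.length / spacing), 1)
    min_dists = [machine_ring.interpolate(i * spacing).distance(manual_ring)
                 for i in range(n_points)]
    return float(np.mean(min_dists))
```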

3. Results

3.1. Jeffries–Matusita (J–M) Distance

For the PBIA classification, the J–M distances were computed for each pair of the bare soil, crop, and other vegetation classes, using the original feature space of the BRSR VI and the blue, green, red, red-edge, and NIR reflectances, and the PBIA training areas (Table 7). On average, the J–M distances are very high, indicating very good class spectral separability. The lowest average (1.75) is between the crop and other vegetation classes, with the lowest individual value for the Barley3 field (1.32). The spectral separabilities for the Crop-Soil and Soil-Vegetation pairs are excellent, with average J–M distances higher than 1.96.
For the GEOBIA classification, the J–M distances were computed for each pair of the bare soil, crop, and other vegetation classes from the training objects with the blue, green, red, red-edge, NIR reflectance, and BRSR images (Table 8). The feature space for these J–M distances consists of the mean, SD, and median of all the pixel values within each object, and the mean and SD of the pixel values after removing the 10% lowest and highest values. All three pairs of classes have excellent class separability, with a mean J–M distance higher than 1.99 and a J–M distance for each field higher than 1.96. The J–M distances for the GEOBIA classification are, on average and for every individual field, higher than those obtained with the PBIA training areas, showing that a better classification can potentially be achieved with the GEOBIA classification and that feature selection is very important for discriminating between the probability distributions of each pair of classes.

3.2. Classification Accuracy

The OOB error rate from the RF is presented in Table 9 for each field in the case of the PBIA and GEOBIA classification. For both classifications, the lowest OOB error rate occurs for the Corn3 field, while the highest OOB error rate occurs for the Barley2 field.
The related confusion matrices, including the user's, producer's, and overall classification accuracies (UA, PA, OA) and the errors of omission and commission (EO, EC), are shown for each field in Table 10 for the PBIA RF classifier applied to the blue, green, red, red-edge, NIR reflectance and BRSR VI features, and in Table 11 for the GEOBIA RF classifier applied to the object features generated from the blue, green, red, red-edge, NIR reflectance and the BRSR VI. In both cases, the highest overall classification accuracies and the lowest OOB error rates were observed for the Corn3 field. The opposite holds for the Barley2 field, mainly because of confusion between the crop and the bordering vegetation due to vegetation transition and mixture at the field borders. On average, the overall classification accuracy with the PBIA classification was slightly lower than with the GEOBIA classification (97.06% versus 97.49%). This is also the case for every individual field.

3.3. Random Forest (RF) Variable Importance

The variable importance plots show, in decreasing order, each feature's MeanDecreaseAccuracy value for the RF classification. These values have no physical meaning other than serving as a feature importance comparison metric within the feature space. For the PBIA-RF classification, the red-edge and NIR reflectance images appear to be the most important input features (Figure 4). For the GEOBIA-RF classification, the red and NIR reflectance images appear to be the most important input features (Figure 5).

3.4. Field Area and Border Maps

Both the PBIA (Figure 6) and GEOBIA (Figure 7) methodologies show very satisfying results with regard to the final border vector products. The field border maps after the vectorization pipelines show a good fit between the machine- and the manually delineated borders. The GEOBIA field borders are cleaner and more robust, whereas the PBIA misclassifications between all classes have to be dealt with using computationally expensive noise-removal algorithms. For PBIA, the border comparison displays the best result for Corn1 and the worst for Oat (Figure 6). For GEOBIA, the best result is achieved for Corn1 and the worst for Barley1 (Figure 7). A visual comparison between the manually and machine-delineated field borders reveals several minor factors that lead to misclassifications and border skewness and that explain most of the divergence between the machine- and manually delineated borders. At the borders of all the crop fields, such factors are surrounding tree canopies and canopy shadows on top of the crops, as well as the transitional mixture of wild weeds and crops.

3.5. Accuracy Metrics

Table 12 compares the AGoF (in %) between the PBIA and GEOBIA classifications. There is no important difference between the two classifications for the mean AGoF (98.91% and 98.78%, respectively). Concerning the BMPE (in m), it is on average slightly higher with the GEOBIA classification (0.76 m) than with the PBIA classification (0.68 m), indicating that PBIA is a slightly more accurate method on average (Table 13). For most of the fields, the difference is less than 0.1 m and thus is not important.

4. Discussion

We performed PBIA and GEOBIA with RF classification to delineate crop field boundaries and extract field areas using UAS multispectral imagery acquired over corn, oat, and barley vegetated fields. In both cases, we achieved an average classification accuracy higher than 97%, a mean AGoF greater than 98%, and a mean BMPE lower than 0.8 m. All these values are of the same order of magnitude as those of Vlachopoulos et al. [8], who applied mean-shift clustering and the RF classifier to delineate field areas and boundaries from UAS imagery acquired over bare soil fields. The average overall classification accuracies in this study were higher than those of De Luca et al. [9] (89–97.6%), who applied RF and SVM to RGB and NIR UAS imagery, and of the same order of magnitude as those of Laliberte and Rango [10] (95–100%), who used GEOBIA with decision tree analysis on UAS RGB imagery for soil, shrub, and grass demarcation. The lower accuracies we obtained with the Barley1 and Oat fields are mainly due to the surrounding tree shadows and tree canopies on top of the crops, the transitional mixture of wild weeds and crops at the field boundaries, and the sparsely planted areas at the borders.
The AGoF and BMPE values do not differ much between the two methodologies, indicating that the methods can be considered equally strong. However, each methodology has its advantages and disadvantages. The PBIA methodology has fewer steps but is more time- and computing-resource-consuming, mainly because of the majority filtering step. For the GEOBIA methodology, such filtering is not needed because the classified objects are homogeneous and do not contain isolated pixels, as shown in Figure 7. Because the GEOBIA methodology classifies objects rather than pixels, the number of training samples is much lower than that of PBIA, resulting in quicker RF training and classification, even if the feature space of the GEOBIA method has a higher number of input features than that of the PBIA method. The feature space in each methodology is related to the use of either pixels or objects. In PBIA, the features are the five reflectance bands and one VI. In GEOBIA, the feature space consists of the mean, median, and standard deviation of the pixel values and the mean and standard deviation of the pixel values within the 10th to 90th percentile range associated with each object, for each reflectance band and the BRSR VI. As a result, a direct comparison of the two feature spaces cannot be made. However, it is worth mentioning that we use the same reflectance bands and VI for both PBIA and GEOBIA, making the original information a common ground in both methodologies. The higher number of features used in GEOBIA is needed, firstly, to add discriminatory information to each object and, secondly, because the GEOBIA training dataset is limited, as it is related to a limited number of objects. To construct a robust training dataset and an informative feature space, multiple spectral characteristics of the objects are needed, providing an appropriate number of input features for the classification while excluding outliers.
The GEOBIA method utilizes the eCognition software for image segmentation, which is a commercial product that may not always be available. It would be interesting to explore other segmentation methods such as the open-source Orfeo Toolbox (OTB) [47] and compare their performance.
Both methods rely on random forests as the classifier. RF is an open-source ML algorithm that is easily implemented, cost-effective, and conveniently replicated and generalized to other crops and fields. The training data for the RF algorithm have a very high level of randomness for both PBIA and GEOBIA due to the feature selection and bootstrapping used in the RF methodology. Specifically, training samples for PBIA are randomly selected from all the training areas. This randomness reduces the bias from both the user and algorithm sides, making both methodologies scalable. The proposed methodologies perform better than the convolutional neural network method proposed by Crommelinck et al. [6], as the latter requires distinct features such as roads and water bodies to detect cadastral plot boundaries. Our methodologies do not need multitemporal series of images such as those used by De Luca et al. [9] and Persello et al. [1]. The PBIA methodology used in this study does not need commercial software, whereas Fetai et al. [7] employed ENVI, which may be inaccessible due to commercial costs.
The time needed to perform both methodologies depends on the computational power of the computer used and the size of the UAS imagery. For example, to apply the proposed methodologies, the computational time might be on the order of seconds to a few minutes with a standard server-driven solution and on the order of minutes with a personal computer. For the fields we tested, the proposed methodologies were more efficient than manual delineation. Additionally, manual delineation is quite demanding, as fields can be very irregular, and it is highly laborious to outline the borders of a crop field with hundreds or thousands of geographical points. Another advantage of the proposed methodology is that it can be used on any number and size of fields. It is therefore service-driven, scalable, and free of user bias.

5. Conclusions

This study presents a method to delineate field areas and boundaries from UAS multispectral images acquired over seven vegetated fields with various crops (barley, corn, and oat). The imagery was classified using the non-parametric supervised random forest classifier, which was applied either to the raw images (pixel-based image analysis, or PBIA) or to images that were previously segmented with the multiresolution segmentation algorithm implemented in eCognition (geographic object-based image analysis, or GEOBIA). Both methodologies classify the images into three classes (soil, crop, other vegetation) using the blue, green, red, red-edge, and NIR reflectance bands and the BRSR VI. For PBIA, the RF classification used pixel samples drawn from the training areas. The GEOBIA classification used the following statistical parameters for each object from the same training areas: mean reflectance or VI; standard deviation (SD) of the reflectances or VIs; median reflectance or VI; mean reflectance or VI after removing the lowest and highest 10% of the reflectance or VI pixel values; and SD of the reflectances or VIs after removing the lowest and highest 10% of the reflectance or VI pixel values. The accuracy of both methods was assessed using the classification accuracies and two metrics following Vlachopoulos et al. [8]: the area goodness of fit (AGoF) for the field area and the boundary mean positional error (BMPE) for the crop borders. Both classifications performed exceptionally well, with an average accuracy higher than 97%, leading to a mean AGoF greater than 98% and a mean BMPE lower than 0.8 m. Moreover, both classification methods rely on random forests, an open-source ML algorithm that is easily implemented, highly efficient, and can be replicated and generalized to other crops and fields.
Our excellent results indicate only minor divergences between the photo-interpreted and machine-delineated field areas and boundaries, which depend on some field-specific characteristics, such as tree shadows, tree canopies, the transitional mixture of vegetation and crop plants at the field boundaries, and sparsely cropped areas at the borders. Further research is needed to investigate the effect of these factors on the results. While there were no significant differences between the two classification methods in terms of AGoF and BMPE values, the GEOBIA classification shows more promising results than the PBIA classification because it is faster to perform, gives higher classification accuracies and class separability, does not need a post-classification filtering process, and is less intensive in terms of data processing. However, compared to the PBIA classification, it depends on the use of the eCognition software, which is not free. Future research is needed to perform the GEOBIA segmentation with open-source segmentation algorithms. Also, the GEOBIA classification we tested uses a single segmentation step, and further work is needed to assess whether an eCognition hierarchy of processes can refine and simplify the segmentation and thus the classification. Our results were obtained using multispectral UAS imagery acquired at a specific flight altitude and under clear sky conditions, as well as over particular crops (barley, oat, and corn). Further work is needed to test UAS imagery acquired at a different flight altitude, leading to a different GSD. It will also be interesting to test UAS imagery acquired under diverse weather conditions and over other crops. Finally, the scalability of the proposed methodologies can be tested and generalized with sufficient training data from other crops and land-cover classes. Our study is an important step towards the development of an operational and efficient methodology for the automatic delineation of field boundaries and areas from UAS multispectral data with a combined ML pipeline and vectorization steps, in order to quickly acquire field areas and boundaries for various applications in precision agriculture, for example, determining crop insurance premiums.

Author Contributions

Conceptualization, O.V., B.L., J.W.; methodology, investigation, formal analysis, O.V.; writing—original draft preparation, O.V.; writing—review and editing, B.L., J.W., A.H., A.L.; funding acquisition, B.L., J.W., G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Sciences and Engineering Research Council of Canada (NSERC Canada) grant number CRDPJ507141-16 awarded to Brigitte Leblon (University of New Brunswick) and Jinfei Wang (University of Western Ontario) and by a New Brunswick Innovation Foundation grant awarded to Brigitte Leblon (University of New Brunswick).

Acknowledgments

The authors sincerely appreciate the field personnel who helped with the overall experimental data acquisition: Antony Bastarache, Brennan Gaudet, Nafsika Papageorgiou, and Timothy Roberts.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Persello, C.; Tolpekin, V.A.; Bergado, J.R.; de By, R.A. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253. [Google Scholar] [CrossRef]
  2. Graesser, J.; Ramankutty, N. Detection of cropland field parcels from Landsat imagery. Remote Sens. Environ. 2017, 201, 165–180. [Google Scholar] [CrossRef] [Green Version]
  3. Wassie, Y.A.; Koeva, M.N.; Bennett, R.M.; Lemmen, C.H.J. A procedure for semi-automated cadastral boundary feature extraction from high-resolution satellite imagery. J. Spat. Sci. 2018, 63, 75–92. [Google Scholar] [CrossRef] [Green Version]
  4. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  5. Kamilaris, A.; Kartakoullis, A.; Prenafeta-Boldú, F.X. A review on the practice of big data analysis in agriculture. Comput. Electron. Agric. 2017, 143, 23–37. [Google Scholar] [CrossRef]
  6. Crommelinck, S.; Koeva, M.; Yang, M.Y.; Vosselman, G. Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery. Remote Sens. 2019, 11, 2505. [Google Scholar] [CrossRef] [Green Version]
  7. Fetai, B.; Oštir, K.; Kosmatin Fras, M.; Lisec, A. Extraction of Visible Boundaries for Cadastral Mapping Based on UAV Imagery. Remote Sens. 2019, 11, 1510. [Google Scholar] [CrossRef] [Green Version]
  8. Vlachopoulos, O.; Leblon, B.; Wang, J.; Haddadi, A.; Larocque, A.; Patterson, G. Delineation of bare soil field areas from Unmanned Aircraft System imagery with Mean Shift clustering and Random Forest classification. Can. J. Remote Sens. 2020. [Google Scholar] [CrossRef]
  9. De Luca, G.; Silva, J.M.; Cerasoli, S.; Araújo, J.; Campos, J.; Di Fazio, S.; Modica, G. Object-Based Land Cover Classification of Cork Oak Woodlands using UAV Imagery and Orfeo ToolBox. Remote Sens. 2019, 11, 1238. [Google Scholar] [CrossRef] [Green Version]
  10. Laliberte, A.S.; Rango, A. Image Processing and Classification Procedures for Analysis of Sub-decimeter Imagery Acquired with an Unmanned Aircraft over Arid Rangelands. GISci. Remote Sens. 2011, 48, 4–23. [Google Scholar] [CrossRef] [Green Version]
  11. Peña, J.M.; Torres-Sánchez, J.; de Castro, A.I.; Kelly, M.; López-Granados, F. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images. PLoS ONE 2013, 8, e77151. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Torres-Sánchez, J.; López-Granados, F.; Peña, J.M. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Comput. Electron. Agric. 2015, 114, 43–52. [Google Scholar] [CrossRef]
  13. De Souza, C.H.W.; Lamparelli, R.A.C.; Rocha, J.V.; Magalhães, P.S.G. Mapping skips in sugarcane fields using object-based analysis of unmanned aerial vehicle (UAV) images. Comput. Electron. Agric. 2017, 143, 49–56. [Google Scholar] [CrossRef]
  14. Blaschke, T.; Lang, S.; Hay, G. Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  15. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  16. Arvor, D.; Durieux, L.; Andrés, S.; Laporte, M.A. Advances in Geographic Object-Based Image Analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2013, 82, 125–137. [Google Scholar] [CrossRef]
  17. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293. [Google Scholar] [CrossRef]
  18. Blaschke, T.; Strobl, J. What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS. Z. Geoinf. 2001, 14, 12–17. [Google Scholar]
  19. De Castro, A.I.; Torres-Sánchez, J.; Peña, J.M.; Jiménez-Brenes, F.M.; Csillik, O.; López-Granados, F. An Automatic Random Forest-OBIA Algorithm for Early Weed Mapping between and within Crop Rows Using UAV Imagery. Remote Sens. 2018, 10, 285. [Google Scholar] [CrossRef] [Green Version]
  20. Huang, H.; Lan, Y.; Yang, A.; Zhang, Y.; Wen, S.; Deng, J. Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 2020, 41, 3446–3479. [Google Scholar] [CrossRef]
  21. Liu, T.; Abd-Elrahman, A. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification. ISPRS J. Photogramm. Remote Sens. 2018, 139, 154–170. [Google Scholar] [CrossRef]
  22. Criminisi, A.; Shotton, J. Decision Forests for Computer Vision and Medical Image Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  23. QGIS Development Team. QGIS Geographic Information System; Open Source Geospatial Foundation Project. Available online: http://qgis.osgeo.org (accessed on 1 June 2019).
  24. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards A New Paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. GDAL/OGR contributors. GDAL/OGR Geospatial Data Abstraction Software Library; Open Source Geospatial Foundation. Available online: https://gdal.org (accessed on 1 June 2019).
  26. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  27. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  28. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  29. Breiman, L. Manual for Setting up, Using, and Understanding Random Forest V4.0; 2003. Available online: https://www.stat.berkeley.edu/~breiman/Using_random_forests_v4.0.pdf (accessed on 1 August 2019).
  30. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  31. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  32. Wickham, H. ggplot2: Elegant Graphics for Data Analysis; Springer: New York, NY, USA, 2016. [Google Scholar]
  33. Wacker, A.G.; Landgrebe, D.A. Minimum distance classification in remote sensing. LARS Tech. Rep. 1972. [Google Scholar]
  34. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3. [Google Scholar]
  35. Bhattacharyya, A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943, 35, 99–109. [Google Scholar]
  36. Varmuza, K.; Filzmoser, P. Introduction to Multivariate Statistical Analysis in Chemometrics; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  37. Pyle, D. Data Preparation for Data Mining; Morgan Kaufmann: San Fransisco, CA, USA, 1999; ISBN 978-1-55860-529-9. [Google Scholar]
  38. Yuan, D.; Elvidge, C.D. Comparison of relative radiometric normalization techniques. ISPRS J. Photogramm. Remote Sens. 1996, 51, 117–126. [Google Scholar] [CrossRef]
  39. Umbaugh, S.E. Digital Image Processing and Analysis: Applications with MATLAB and CVIPtools; CRC Press: Boca Raton, FL, USA, 2017; ISBN 978-1-4987-6607-4. [Google Scholar]
  40. Baatz, M.; Schape, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informations-Verarbeitung, XII; Strobl, J., Blaschke, T., Griesbner, G., Eds.; Wichmann Verlag: Karlsruhe, Germany, 2000; pp. 12–23. [Google Scholar]
  41. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  42. Thenkabail, P.S. Remotely Sensed Data Characterization, Classification, and Accuracies; CRC Press: Boca Raton, FL, USA, 2015; ISBN 978-1-4822-1787-2. [Google Scholar]
  43. Hamedianfar, A.; Shafri, H.Z.M. Detailed intra-urban mapping through transferable OBIA rule sets using WorldView-2 very-high-resolution satellite images. Int. J. Remote Sens. 2015, 36, 3380–3396. [Google Scholar] [CrossRef]
  44. Su, T.; Zhang, S.; Liu, T. Multi-Spectral Image Classification Based on an Object-Based Active Learning Approach. Remote Sens. 2020, 12, 504. [Google Scholar] [CrossRef] [Green Version]
  45. Conrad, O.; Bechtel, B.; Bock, M.; Dietrich, H.; Fischer, E.; Gerlitz, L.; Wehberg, J.; Wichmann, V.; Böhner, J. System for Automated Geoscientific Analyses (SAGA) v. 2.1. 4. Geosci. Model Dev. 2015, 8, 1991–2007. [Google Scholar] [CrossRef] [Green Version]
  46. Hargrove, W.W.; Hoffman, F.M.; Hessburg, P.F. Mapcurves: A quantitative method for comparing categorical maps. J. Geogr. Syst. 2006, 8, 187. [Google Scholar] [CrossRef]
  47. Grizonnet, M.; Michel, J.; Poughon, V.; Inglada, J.; Savinaud, M.; Cresson, R. Orfeo ToolBox: Open source processing of remote sensing images. Open Geospat. Data Softw. Stand. 2017, 2, 15. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Location of the study sites in Prince Edward Island. ESRI Satellite (ArcGIS/World Imagery).
Figure 2. Flowchart of the pixel-based image analysis (PBIA) methodology.
Figure 3. Flowchart of the Geographic Object-Based Image Analysis (GEOBIA) methodology.
Figure 4. Variable importance plot from the pixel-based image analysis random forest classifier for the following fields: (a) Barley1; (b) Barley2; (c) Barley3; (d) Corn1; (e) Corn2; (f) Corn3; (g) Oat.
Figure 5. Variable importance plot from the geographic object-based image analysis random forest classification for the following fields: (a) Barley1; (b) Barley2; (c) Barley3; (d) Corn1; (e) Corn2; (f) Corn3; (g) Oat.
Figure 6. Red, green and blue (RGB) image composites comparing the manually-delineated and PBIA-delineated borders for the following fields: (a) Barley1, (b) Barley2, (c) Barley3, (d) Corn1, (e) Corn2, (f) Corn3, (g) Oat; (h) Legend. The manually delineated borders are represented in yellow. The machine delineated borders are represented in red.
Figure 7. RGB image composites comparing the manually-delineated and GEOBIA-delineated borders for the following fields: (a) Barley1, (b) Barley2, (c) Barley3, (d) Corn1, (e) Corn2, (f) Corn3, (g) Oat; (h) Legend. The manually delineated borders are represented in yellow. The machine delineated borders are represented in red.
Table 1. Unmanned aircraft system (UAS) flight parameters.

| Flight Number | Field   | Number of Images | Date       | Time (UTC-3) | Area (m2)  |
|---------------|---------|------------------|------------|--------------|------------|
| F.1           | Barley1 | 3615             | 2018-07-20 | 11:30        | 310,131    |
|               | Barley2 |                  |            |              | 34,832.00  |
|               | Barley3 |                  |            |              | 61,282.00  |
| F.2           | Corn1   | 1345             | 2018-08-21 | 12:30        | 89,463.00  |
| F.3           | Corn2   | 2415             | 2018-08-21 | 13:00        | 309,163    |
| F.4           | Corn3   | 2450             | 2018-08-21 | 13:30        | 234,493    |
| F.5           | Oat     | 2675             | 2018-07-20 | 14:00        | 301,261.40 |
Table 2. MicaSense RedEdge band characteristics.

| Band                    | Blue    | Green   | Red     | Red-Edge | NIR     |
|-------------------------|---------|---------|---------|----------|---------|
| Range (nm)              | 465–485 | 550–570 | 663–673 | 712–722  | 820–860 |
| Bandwidth (nm)          | 20      | 20      | 10      | 10       | 40      |
| Central wavelength (nm) | 475     | 560     | 668     | 717      | 840     |
Table 3. Number of training areas per class for PBIA.

| Field   | Soil: Areas | Soil: Pixels | Crop: Areas | Crop: Pixels | Other Vegetation: Areas | Other Vegetation: Pixels | Total: Areas | Total: Pixels |
|---------|-------------|--------------|-------------|--------------|-------------------------|--------------------------|--------------|---------------|
| Barley1 | 16 | 10,006 | 25 | 10,013 | 26 | 10,013 | 67 | 30,032 |
| Barley2 | 11 | 10,002 | 15 | 10,008 | 15 | 10,008 | 41 | 30,018 |
| Barley3 | 15 | 10,003 | 15 | 10,008 | 21 | 10,011 | 51 | 30,022 |
| Corn1   | 23 | 10,004 | 16 | 10,009 | 20 | 10,009 | 59 | 30,022 |
| Corn2   | 23 | 10,011 | 28 | 10,014 | 17 | 10,007 | 68 | 30,032 |
| Corn3   | 13 | 10,008 | 23 | 10,008 | 21 | 10,006 | 57 | 30,022 |
| Oat     | 22 | 10,010 | 20 | 10,010 | 21 | 10,014 | 63 | 30,034 |
Table 4. Number of objects and objects mean area (m2) per field.

| Field   | Number of Objects | Mean Object Area (m2) |
|---------|-------------------|-----------------------|
| Barley1 | 55,625 | 5.6 |
| Barley2 | 8944   | 3.9 |
| Barley3 | 11,153 | 5.5 |
| Corn1   | 23,118 | 3.9 |
| Corn2   | 44,057 | 7.0 |
| Corn3   | 58,897 | 3.9 |
| Oat     | 38,558 | 7.8 |
Table 5. List of the GEOBIA object features with their associated names used in the study.

| Band/Index | Mean | Standard Deviation (SD) | Median | Mean - Percentiles | SD - Percentiles |
|---|---|---|---|---|---|
| Blue reflectance | Blue_Mean | Blue_SD | Blue_Median | Blue_Perc_Mean | Blue_Perc_SD |
| Green reflectance | Green_Mean | Green_SD | Green_Median | Green_Perc_Mean | Green_Perc_SD |
| Red reflectance | Red_Mean | Red_SD | Red_Median | Red_Perc_Mean | Red_Perc_SD |
| Red-Edge reflectance | RedEdge_Mean | RedEdge_SD | RedEdge_Median | RedEdge_Perc_Mean | RedEdge_Perc_SD |
| NIR reflectance | NIR_Mean | NIR_SD | NIR_Median | NIR_Perc_Mean | NIR_Perc_SD |
| BRSR VI | BRSR_Mean | BRSR_SD | BRSR_Median | BRSR_Perc_Mean | BRSR_Perc_SD |
Table 6. Number of training objects per class for GEOBIA and their mean object area (m2).

| Field   | Soil | Crop | Other Vegetation | Total  | Mean Object Area (m2) |
|---------|------|------|------------------|--------|-----------------------|
| Barley1 | 262 | 4526 | 5241 | 10,029 | 2.8 |
| Barley2 | 27  | 1502 | 542  | 2071   | 1.8 |
| Barley3 | 128 | 1610 | 716  | 2454   | 2.4 |
| Corn1   | 86  | 2419 | 1808 | 4313   | 1.9 |
| Corn2   | 321 | 4172 | 2049 | 6542   | 3.6 |
| Corn3   | 247 | 6387 | 3917 | 10,551 | 2.2 |
| Oat     | 194 | 1930 | 3192 | 5316   | 3.6 |
Table 7. Jeffries–Matusita distance between the three classes computed with the pixel-based image analysis training areas using the blue, green, red, red-edge, NIR reflectance, and BRSR VI features.

| Field   | Crop-Soil | Crop-Vegetation | Soil-Vegetation |
|---------|-----------|-----------------|-----------------|
| Barley1 | 1.999925 | 1.782663 | 1.99192  |
| Barley2 | 1.973503 | 1.602653 | 1.962501 |
| Barley3 | 1.999999 | 1.322589 | 1.999988 |
| Corn1   | 1.999999 | 1.878568 | 1.999456 |
| Corn2   | 1.999999 | 1.929277 | 1.989666 |
| Corn3   | 1.999984 | 1.898402 | 1.998441 |
| Oat     | 1.998787 | 1.846792 | 1.961582 |
| Average | 1.996028 | 1.751563 | 1.986222 |
Table 8. Jeffries–Matusita distance between the three classes in the case of the geographic object-based image analysis classification using the full GEOBIA feature list of Table 5.

| Field   | Crop-Soil | Crop-Vegetation | Soil-Vegetation |
|---------|-----------|-----------------|-----------------|
| Barley1 | 1.999999 | 1.999672 | 1.999999 |
| Barley2 | 1.999854 | 1.960241 | 1.999805 |
| Barley3 | 1.999999 | 1.991511 | 1.999999 |
| Corn1   | 1.999999 | 1.999945 | 1.999999 |
| Corn2   | 1.999999 | 1.999992 | 1.999643 |
| Corn3   | 1.999999 | 1.999989 | 1.999996 |
| Oat     | 1.999999 | 1.999997 | 1.999999 |
| Average | 1.999978 | 1.993049 | 1.999920 |
Table 9. Random forest pixel-based image analysis and geographic object-based image analysis out-of-bag error rates (%) as a function of the field.

| Field   | PBIA | GEOBIA |
|---------|------|--------|
| Barley1 | 2.96 | 2.17 |
| Barley2 | 7.17 | 6.23 |
| Barley3 | 5.11 | 4.97 |
| Corn1   | 1.38 | 1.32 |
| Corn2   | 1.33 | 1.02 |
| Corn3   | 0.89 | 0.65 |
| Oat     | 1.77 | 1.20 |
| Average | 2.94 | 2.51 |
Table 10. Confusion matrices and associated class user's accuracies (UA), producer's accuracies (PA), overall accuracies (OA), errors of omission (EO) and errors of commission (EC) obtained by applying the PBIA random forest classifier to the blue, green, red, red-edge, near-infrared reflectance and blue-red simple ratio images. The bold diagonal elements are the correctly classified pixels for each class.

| Field   | Class            | Soil  | Crop  | Other Vegetation | UA (%) | EC (%) | OA (%) |
|---------|------------------|-------|-------|------------------|--------|--------|--------|
| Barley1 | Soil             | **9922** | 1 | 83 | 99.16 | 0.84 | 97.04 |
|         | Crop             | 13 | **9635** | 365 | 96.22 | 3.78 | |
|         | Other vegetation | 58 | 368 | **9587** | 95.75 | 4.25 | |
|         | PA (%)           | 99.29 | 96.31 | 95.54 | | | |
|         | EO (%)           | 0.71 | 3.69 | 4.46 | | | |
| Barley2 | Soil             | **9937** | 40 | 25 | 99.35 | 0.65 | 92.83 |
|         | Crop             | 154 | **9042** | 812 | 90.35 | 9.65 | |
|         | Other vegetation | 71 | 1051 | **8886** | 88.79 | 11.21 | |
|         | PA (%)           | 97.79 | 89.23 | 91.39 | | | |
|         | EO (%)           | 2.21 | 10.77 | 8.61 | | | |
| Barley3 | Soil             | **9998** | 4 | 1 | 99.95 | 0.05 | 94.89 |
|         | Crop             | 5 | **9517** | 486 | 95.09 | 4.91 | |
|         | Other vegetation | 6 | 1031 | **8974** | 89.64 | 10.36 | |
|         | PA (%)           | 99.89 | 90.19 | 94.85 | | | |
|         | EO (%)           | 0.11 | 9.81 | 5.15 | | | |
| Corn1   | Soil             | **9998** | 0 | 6 | 99.94 | 0.06 | 98.62 |
|         | Crop             | 0 | **9802** | 207 | 97.93 | 2.07 | |
|         | Other vegetation | 9 | 192 | **9808** | 97.99 | 2.01 | |
|         | PA (%)           | 99.91 | 98.08 | 97.87 | | | |
|         | EO (%)           | 0.09 | 1.92 | 2.13 | | | |
| Corn2   | Soil             | **9974** | 0 | 37 | 99.63 | 0.37 | 98.67 |
|         | Crop             | 4 | **9843** | 167 | 98.29 | 1.71 | |
|         | Other vegetation | 81 | 109 | **9817** | 98.10 | 1.90 | |
|         | PA (%)           | 99.15 | 98.90 | 97.96 | | | |
|         | EO (%)           | 0.85 | 1.10 | 2.04 | | | |
| Corn3   | Soil             | **9982** | 1 | 25 | 99.74 | 0.26 | 99.11 |
|         | Crop             | 0 | **9884** | 124 | 98.76 | 1.24 | |
|         | Other vegetation | 36 | 82 | **9888** | 98.82 | 1.18 | |
|         | PA (%)           | 99.64 | 99.17 | 98.52 | | | |
|         | EO (%)           | 0.36 | 0.83 | 1.48 | | | |
| Oat     | Soil             | **9974** | 1 | 35 | 99.64 | 0.36 | 98.23 |
|         | Crop             | 8 | **9786** | 216 | 97.76 | 2.24 | |
|         | Other vegetation | 36 | 235 | **9743** | 97.29 | 2.71 | |
|         | PA (%)           | 99.56 | 97.65 | 97.49 | | | |
|         | EO (%)           | 0.44 | 2.35 | 2.51 | | | |
Table 11. Confusion matrices and associated class user's accuracies (UA), producer's accuracies (PA), overall accuracies (OA), errors of omission (EO) and errors of commission (EC) obtained by applying the GEOBIA random forest classifier to the object features (*) derived from the blue, green, red, red-edge, near-infrared reflectance and blue-red simple ratio images. The bold diagonal elements are the correctly classified objects for each class.

| Field   | Class            | Soil | Crop | Other Vegetation | UA (%) | EC (%) | OA (%) |
|---------|------------------|------|------|------------------|--------|--------|--------|
| Barley1 | Soil             | **252** | 2 | 8 | 96.18 | 3.82 | 97.83 |
|         | Crop             | 3 | **4440** | 114 | 97.43 | 2.57 | |
|         | Other vegetation | 7 | 84 | **5119** | 98.25 | 1.75 | |
|         | PA (%)           | 96.18 | 98.10 | 97.67 | | | |
|         | EO (%)           | 3.82 | 1.90 | 2.33 | | | |
| Barley2 | Soil             | **23** | 10 | 8 | 56.10 | 43.90 | 93.77 |
|         | Crop             | 2 | **1412** | 27 | 97.99 | 2.01 | |
|         | Other vegetation | 2 | 80 | **507** | 86.08 | 13.92 | |
|         | PA (%)           | 85.19 | 94.01 | 93.54 | | | |
|         | EO (%)           | 14.81 | 5.99 | 6.46 | | | |
| Barley3 | Soil             | **126** | 2 | 1 | 97.67 | 2.33 | 95.03 |
|         | Crop             | 2 | **1524** | 33 | 97.75 | 2.25 | |
|         | Other vegetation | 0 | 84 | **682** | 89.03 | 10.97 | |
|         | PA (%)           | 98.44 | 94.66 | 95.25 | | | |
|         | EO (%)           | 1.56 | 5.34 | 4.75 | | | |
| Corn1   | Soil             | **86** | 0 | 5 | 94.51 | 5.49 | 98.68 |
|         | Crop             | 0 | **2400** | 33 | 98.64 | 1.36 | |
|         | Other vegetation | 0 | 19 | **1770** | 98.94 | 1.06 | |
|         | PA (%)           | 100.00 | 99.21 | 97.90 | | | |
|         | EO (%)           | 0.00 | 0.79 | 2.10 | | | |
| Corn2   | Soil             | **310** | 0 | 13 | 95.98 | 4.02 | 98.98 |
|         | Crop             | 1 | **4157** | 28 | 99.31 | 0.69 | |
|         | Other vegetation | 10 | 15 | **2008** | 98.77 | 1.23 | |
|         | PA (%)           | 96.57 | 99.64 | 98.00 | | | |
|         | EO (%)           | 3.43 | 0.36 | 2.00 | | | |
| Corn3   | Soil             | **241** | 3 | 13 | 93.77 | 6.23 | 99.36 |
|         | Crop             | 0 | **6366** | 28 | 99.56 | 0.44 | |
|         | Other vegetation | 6 | 18 | **3876** | 99.38 | 0.62 | |
|         | PA (%)           | 97.57 | 99.67 | 98.95 | | | |
|         | EO (%)           | 2.43 | 0.33 | 1.05 | | | |
| Oat     | Soil             | **188** | 0 | 7 | 96.41 | 3.59 | 98.80 |
|         | Crop             | 2 | **1919** | 40 | 97.86 | 2.14 | |
|         | Other vegetation | 4 | 11 | **3145** | 99.53 | 0.47 | |
|         | PA (%)           | 96.91 | 99.43 | 98.53 | | | |
|         | EO (%)           | 3.09 | 0.57 | 1.47 | | | |

(*) Object features: mean reflectance or VI; standard deviation (SD) of the reflectance or VI; median reflectance or VI; mean reflectance or VI after removing the lowest and highest 10% reflectance or VI pixel values; SD of the reflectance or VI after removing the lowest and highest 10% reflectance or VI pixel values.
Table 12. Area goodness of fit (%) computed by comparing the manually- and machine-delineated field areas for the pixel-based image analysis and geographic object-based image analysis classifications.

| Field   | PBIA  | GEOBIA | PBIA - GEOBIA |
|---------|-------|--------|---------------|
| Barley1 | 98.91 | 98.40 | 0.51  |
| Barley2 | 98.89 | 98.32 | 0.57  |
| Barley3 | 99.03 | 98.96 | 0.07  |
| Corn1   | 99.10 | 99.03 | 0.07  |
| Corn2   | 99.26 | 99.47 | -0.21 |
| Corn3   | 99.74 | 99.68 | 0.06  |
| Oat     | 97.48 | 97.62 | -0.14 |
| Average | 98.91 | 98.78 | 0.13  |
Table 13. Boundary mean positional error (in meters) computed by comparing the manually- and machine-delineated field boundaries for the pixel-based image analysis and geographic object-based image analysis classifications.

| Field   | PBIA | GEOBIA | PBIA - GEOBIA |
|---------|------|--------|---------------|
| Barley1 | 1.05 | 1.72 | -0.67 |
| Barley2 | 0.62 | 0.69 | -0.07 |
| Barley3 | 0.58 | 0.63 | -0.05 |
| Corn1   | 0.26 | 0.24 | 0.02  |
| Corn2   | 0.61 | 0.44 | 0.17  |
| Corn3   | 0.32 | 0.37 | -0.05 |
| Oat     | 1.34 | 1.21 | 0.13  |
| Average | 0.68 | 0.76 | -0.08 |

Share and Cite

MDPI and ACS Style

Vlachopoulos, O.; Leblon, B.; Wang, J.; Haddadi, A.; LaRocque, A.; Patterson, G. Delineation of Crop Field Areas and Boundaries from UAS Imagery Using PBIA and GEOBIA with Random Forest Classification. Remote Sens. 2020, 12, 2640. https://0-doi-org.brum.beds.ac.uk/10.3390/rs12162640

