Article

Increasing the Accuracy and Automation of Fractional Vegetation Cover Estimation from Digital Photographs

1 Department of Physics, The University of the West Indies, Mona, Jamaica
2 Central Great Plains Research Station, USDA-ARS, Akron, CO 80720, USA
3 Department of Life Sciences, The University of the West Indies, Mona, Jamaica
* Author to whom correspondence should be addressed.
Submission received: 4 April 2016 / Revised: 13 May 2016 / Accepted: 23 May 2016 / Published: 23 June 2016
(This article belongs to the Special Issue Remote Sensing of Vegetation Structure and Dynamics)

Abstract

The use of automated methods to estimate fractional vegetation cover (FVC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive FVC measurements. Wide acceptance has been delayed because of the limitations in accuracy, speed, automation and generalization of these methods. This work introduces a novel technique, the Automated Canopy Estimator (ACE) that overcomes many of these challenges to produce accurate estimates of fractional vegetation cover using an unsupervised segmentation process. ACE is shown to outperform nine other segmentation algorithms, consisting of both threshold-based and machine learning approaches, in the segmentation of photographs of four different crops (oat, corn, rapeseed and flax) with an overall accuracy of 89.6%. ACE is similarly accurate (88.7%) when applied to remotely sensed corn, producing FVC estimates that are strongly correlated with ground truth values.


1. Introduction

Fractional vegetation cover (FVC), which is defined as the vertical projection of foliage onto a horizontal surface, is an important measure of crop development [1]. FVC can be used as direct input to crop models or as a predictor of crop yield, above-ground biomass and plant nutritional status [2,3,4,5,6,7]. An advantage that FVC holds over other measures of crop development, such as Leaf Area Index (LAI), is that it can be estimated from the analysis of digital photographs. This offers the potential for a simple, low-cost approach to measuring crop development.
The segmentation of digital photographs is becoming increasingly important in agriculture. Applications include vegetation monitoring [8,9], estimation of LAI [10,11,12], assessment of plant nutritional status [6,7,13], fractional vegetation cover measurement [1,14,15,16,17], characterization of growth [18], weed detection [19] and crop identification [20]. Determining FVC from digital photographs is often simpler, faster and more economical than measuring LAI [1,15,17]. FVC values derived from ground and near-ground images are also vital for validating, and thus ensuring the quality of, FVC estimates derived from satellite images [9,21,22,23,24,25]. However, there are often significant problems with current FVC estimation methodologies. Owing to a lack of automation, the approaches are sometimes very time-consuming; they may also require the user to have knowledge of image processing techniques. Most importantly, current techniques generally lack sufficient accuracy to be useful [16,26].
Image segmentation for fractional vegetation cover estimation typically utilizes one of two approaches: a threshold-based approach or a machine learning approach. Examples of threshold-based approaches include the Color Index of Vegetation Extraction (CIVE) [27], Excess Green with Otsu’s method (ExG & Otsu) [28] and the Green Crop Tracker [10]. Both CIVE and ExG & Otsu manipulate the values of the RGB channels of an image in order to distinguish pixels dominated by green crop canopy from soil and background pixels; Otsu’s method [29] is then applied to extract the fractional vegetation cover. The Green Crop Tracker automatically derives estimates of LAI and FVC from digital photographs of growing crops, segmenting green canopy from soil and other background material using a histogram-based threshold technique applied to the RGB values of the photographs. Limitations of these approaches include underperformance of the segmentation algorithm when the canopy is close to closure and less accurate results under varying plot illumination [10,30]. SHAR-LABFVC [31] is an algorithm developed to deal with the effect of shadows on classification. It converts a brightness-enhanced image to the Commission Internationale de l’Éclairage (CIE) L*a*b* space and fits a log-normal distribution to the greenness values and a Gaussian to the background values. Threshold determination assumes that the misclassification rates of vegetation and background are approximately equal; the segmentation threshold is chosen to ensure this.
Several machine learning approaches have been developed to estimate fractional vegetation cover. The environmentally adaptive segmentation algorithm (EASA) employed an adaptive, partially supervised clustering process coupled with a Bayesian classifier to segment RGB images [32]. The segmentation method proposed in [12] begins with the supervised selection of a region of an RGB image, which is converted to CIE L*a*b* space; k-means is used to cluster the sub-image into plant canopy and background, and the clusters representing plant canopy are then used to train an artificial neural network that segments images into plant canopy and background. Bai et al. employed morphology modeling to develop a segmentation model for rice [30]. Digital photographs of rice plants are divided into blocks and manually segmented into plant canopy and background. The manually segmented images are converted to CIE L*a*b* space, the distributions of color at different levels of illuminance are generated, and morphological operations (dilation and erosion) are applied, resulting in a color model that can be used to segment rice plants from photographs taken under variable lighting conditions.
While the machine learning approaches give good results, there are some limitations. Firstly, they are typically supervised methods requiring varying levels of human intervention, both for model training and for testing. For instance, the method of Bai et al. [30] requires that a separate model be developed for each crop type that is to be analyzed. Secondly, they tend to employ computationally intensive processes, which may limit the scope of their use.
The objective of this study was to evaluate the performance of a novel, unsupervised, threshold-based segmentation technique, the Automated Canopy Estimator (ACE), which overcomes many of the aforementioned challenges to produce accurate estimates of fractional vegetation cover for both terrestrial and remotely sensed photographs. Section 2 describes the technique and the datasets used in its evaluation; Section 3 compares the performance of ACE with that of other methods; Section 4 discusses the value of the new approach; and Section 5 offers conclusions.

2. Materials and Methods

2.1. Data Collection

The digital photographs used in the study were taken during previously conducted field studies. Four separate crops (seeding rates and row spacing given in parentheses) were included in the analysis: corn: Zea mays L. (34,600 seeds/ha; 76 cm), oat: Avena sativa L. (94.0 kg/ha; 20 cm), rapeseed: Brassica napus L. (7.4 kg/ha; 20 cm) and flax: Linum usitatissimum L. (39.2 kg/ha; 20 cm).
The oat, rapeseed, and flax data were collected from field studies conducted at the United States Department of Agriculture, Agricultural Research Service (USDA-ARS) Central Great Plains Research Station (40°09′N, 103°09′W, 1383 m elevation above sea level) located near Akron, CO, USA [33]. The crops were planted no-till into proso millet stubble on 4 April 2013. Plot sizes were 6.1 by 12.2 m. The plot area was sprayed with glyphosate prior to planting and fertilized with 34 kg N/ha. Hand-weeding was performed twice early in the growing season. Corn was grown in a separate study (details given in [10]) and planted in early May in 2010 and 2011 at both the Akron site and at Sidney, NE, USA (41°12′N, 103°0′W, 1315 m above mean sea level). Soil type at both locations was classified as silt loam (Aridic Argiustolls). Crops in both studies were grown under both dryland (rain-fed) and limited supplemental irrigation conditions.
Photographs were taken at midday with a digital camera held level with the horizon at arm’s length to the south of the photographer. Owing to the differing growth patterns and the varying stages of crop growth, the depth of shadow varies from none to deep. Height above the soil surface was approximately 1.5 m. For photographs of corn taken after 12 July, the photographer climbed a stepladder to get above the canopy. Two inexpensive (<USD 175) digital cameras were used: for corn, a Panasonic Lumix DMC-FS3 and a Panasonic DMC-FP1 (both producing JPEG images, 2048 by 1536 pixels, 180 ppi); for all other crops, only the DMC-FP1 was used.
Additional photographs of corn were taken to compare the analyses of the hand-held digital photographs with analyses of images acquired from a true remote sensing platform. These additional images were obtained from irrigated and rain-fed corn plots at the USDA-ARS Limited Irrigation Research Farm (40°27′N, 104°38′W, 1427 m elevation above sea level) located near Greeley, CO, USA [34]. The corn was planted at 85,500 seeds/ha with 76-cm row spacing on 15 May 2015. Plot size was 9.0 by 43.0 m. A total of 139 kg N/ha fertilizer was applied throughout the growing season. The soil type at this site was classified as sandy loam (Ustic Haplargids). The camera used in this study was a Canon EOS 50D, which produced JPEG images (2352 by 1568 pixels). The camera was attached to a tractor-mounted boom and raised to about 7 m above the soil surface. Images were taken around solar noon on seven dates from 22 June to 1 October. Each image covered an area of about 5.9 m × 4.2 m over the six center rows of each plot.

2.2. Image Segmentation

This paper proposes a new approach to the automated estimation of FVC from digital photographs. The technique converts a digital photograph from RGB to the CIE L*a*b* color space, a space developed by the CIE based on human perception of color. A clear benefit of working in CIE L*a*b* is that it is device-independent [35]; a single approach can therefore be used to segment images captured with different devices, without the negative effects of differing color representations. Colors are represented using three values: L* represents the brightness of the image, a* represents color on the red-green axis, and b* represents color along the blue-yellow axis. By converting to the L*a*b* space, the luminance of the image is separated from the color information, reducing the impact of excessive brightness on the segmentation process. The algorithm extracts and processes the a* channel in order to segment the image into green canopy and background (crop residues or soil).
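As a concrete illustration of this first step, the sketch below converts a photograph from RGB to CIE L*a*b* and extracts the a* channel. The paper does not name its software stack; scikit-image is assumed here, and the function name is illustrative.

```python
# Sketch of the colour-space step: RGB -> CIE L*a*b*, keeping the a* channel.
# Library choice (scikit-image) and the function name are assumptions, not
# the authors' actual implementation.
from skimage import io, color

def extract_a_channel(path):
    """Load an RGB photograph and return its a* channel as a 2-D array."""
    rgb = io.imread(path)        # H x W x 3 array (uint8)
    lab = color.rgb2lab(rgb)     # device-independent CIE L*a*b*
    return lab[:, :, 1]          # a* axis: green is negative, red is positive
```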

Threshold Determination

Figure 1 shows a flowchart detailing the threshold determination of the ACE algorithm. The a* values for an image are compiled into a histogram, which is smoothed using a kernel-based method to remove spurious peaks from the distribution. Figure 2 shows an image of flax (Figure 2a) and its smoothed histogram (Figure 2b), in which the a* values representing green canopy and background are both well represented. The distribution is bimodal, with distinct regions for green canopy pixels (to the left) and for background pixels (to the right). In the ideal case, this bimodality is clearly visible in the distribution; however, this is not always so, for instance, when the canopy is near closure or when fractional vegetation cover is sparse. In these circumstances, a measure of the underlying bimodality of the distribution must be determined. This is achieved by fitting a Gaussian mixture model (GMM) to the smoothed data. The use of Gaussians is motivated by the observation that the distributions of vegetation and background colors are approximately normal. The GMM is expressed as:
g(x \mid \mu_i, \Sigma_i) = \sum_{i=1}^{N} w_i \, \frac{1}{(2\pi)^{1/2} \left| \Sigma_i \right|^{1/2}} \, e^{-\frac{1}{2}(x - \mu_i)^{T} \Sigma_i^{-1} (x - \mu_i)},
where x is a vector of a* color values, and μi and Σi are the mean and covariance of the ith Gaussian, respectively. The contribution of the ith Gaussian is determined by the weight wi; for the bimodal distribution, N is set to 2. The GMM is estimated using the non-linear, iterative Nelder–Mead simplex optimization algorithm [36]. Nelder–Mead is a direct search method that does not use derivatives, so it can be applied to discontinuous functions. The algorithm takes an initial solution set and forms a simplex; the function is evaluated at each vertex of the simplex and minimized by shifting (reflecting, shrinking or expanding) the vertices. The algorithm iteratively seeks an improved estimate until a specified tolerance is reached.
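A minimal sketch of this fitting step is given below: the a* histogram is smoothed with a Gaussian kernel and a two-component, one-dimensional Gaussian mixture is fitted to it by minimizing the squared error with SciPy’s Nelder–Mead optimizer. The bin count, smoothing width and initial guesses are assumptions made for illustration.

```python
# Sketch: kernel-smoothed a* histogram plus a two-Gaussian mixture fitted
# with Nelder-Mead. Bin count, smoothing width and initial guesses are
# assumed values, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

def smoothed_histogram(a_channel, bins=256, sigma=3.0):
    """Histogram of a* values, smoothed to suppress spurious peaks."""
    hist, edges = np.histogram(a_channel.ravel(), bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gaussian_filter1d(hist, sigma)

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def fit_two_gaussians(centers, smoothed):
    """Fit w1*N(mu1, sd1) + (1 - w1)*N(mu2, sd2) to the smoothed histogram."""
    def sse(p):
        w1, mu1, sd1, mu2, sd2 = p
        if not (0.0 < w1 < 1.0 and sd1 > 0.0 and sd2 > 0.0):
            return np.inf                     # reject invalid parameters
        model = (w1 * gaussian(centers, mu1, sd1)
                 + (1.0 - w1) * gaussian(centers, mu2, sd2))
        return np.sum((model - smoothed) ** 2)

    p0 = [0.5, np.percentile(centers, 25), 5.0,   # green-canopy component
          np.percentile(centers, 75), 5.0]        # background component
    return minimize(sse, p0, method='Nelder-Mead').x
```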
A threshold, Tgb, for segmentation is determined automatically by detecting the lowest point between the two peaks of the distribution, where the fitted Gaussians intersect (see Figure 2). An image is segmented by marking pixels with a* color values less than or equal to Tgb as canopy and all other pixels as background. Fractional vegetation cover is estimated as the ratio of the number of canopy pixels to the total number of pixels in the photograph.
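The thresholding and FVC computation can then be sketched as below; the valley-finding heuristic (take the minimum between the two tallest peaks) is an assumed reading of how the intersection point might be located, not the authors’ exact code.

```python
# Sketch: place Tgb at the lowest point between the two main peaks, then
# count pixels with a* <= Tgb as canopy. The peak-picking heuristic is an
# assumption; it presumes at least two peaks are present.
import numpy as np
from scipy.signal import find_peaks

def intersection_threshold(centers, smoothed):
    """Tgb: the minimum of the valley between the two tallest peaks."""
    peaks, props = find_peaks(smoothed, height=0.0)
    two_main = np.sort(peaks[np.argsort(props['peak_heights'])[-2:]])
    valley = two_main[0] + np.argmin(smoothed[two_main[0]:two_main[1] + 1])
    return centers[valley]

def segment_and_estimate_fvc(a_channel, t_gb):
    """Binary canopy mask and FVC = canopy pixels / total pixels."""
    canopy = a_channel <= t_gb      # green pixels have lower a* values
    return canopy, canopy.mean()
```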
In some instances, the bimodality is not revealed by mixture modeling. For example, in Figure 3a, the shadowed areas beneath a corn canopy obscure some regions of soil, and the histogram in Figure 3b shows no obvious bimodality. A generalized thresholding technique would fail to discriminate between green canopy and shadow, leading to under-segmentation of the image and an overestimate of FVC. While the bimodality is not evident, the assumption that the distribution is in fact bimodal remains valid, as the shadowed sections of the image simply flatten the second peak of the distribution. ACE deals with these occurrences by finding the inflection point equivalent to the point at which the two underlying peaks intersect. The inflection point is detected by first taking the difference between adjacent points of the distribution. The resulting vector of differences shows a series of peaks and troughs, indicating the number and direction of the inflection points in the color value distribution (see Figure 3c, for example). Inflection points mark where the curvature of the distribution changes from clockwise to anti-clockwise, or vice versa; a trough in the difference vector represents a change from clockwise to anti-clockwise, while a peak represents the reverse. The second step is to detect the peak or trough in the difference vector that identifies the correct inflection point, i.e., the one indicative of the threshold. With an almost closed canopy, the inflection point lies to the right of the main peak in the distribution, and the second peak in the difference vector, which corresponds to an anti-clockwise to clockwise direction change, indicates the position of the threshold (see Figure 3, for example). When the photograph shows mostly background, the inflection point lies to the left of the main peak in the smoothed distribution, and the first trough in the difference vector, which identifies the clockwise to anti-clockwise direction change, indicates the position of the segmentation threshold.
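This fallback step can be sketched as follows: take first differences of the smoothed distribution and pick the peak or trough that marks the hidden inflection point. Deciding which case applies (near-closed canopy versus mostly background) is left to the caller here, which is an assumption about how the logic might be organized.

```python
# Sketch of the fallback threshold for unimodal-looking histograms: first
# differences of the smoothed distribution reveal the hidden inflection
# point. The case split is passed in by the caller (an assumption), and at
# least one peak/trough is presumed to exist.
import numpy as np
from scipy.signal import find_peaks

def inflection_threshold(centers, smoothed, mostly_canopy):
    diff = np.diff(smoothed)             # the "difference vector"
    if mostly_canopy:
        # Near-closed canopy: threshold at the second peak of the
        # differences, to the right of the main mode.
        peaks, _ = find_peaks(diff)
        pos = peaks[1] if len(peaks) > 1 else peaks[-1]
    else:
        # Mostly background: threshold at the first trough.
        troughs, _ = find_peaks(-diff)
        pos = troughs[0]
    return centers[pos]
```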

3. Results

3.1. Image Segmentation

One of the aims of using image analysis techniques is to develop an automated, low-cost, fast and replicable approach to segmenting vegetation cover from digital photographs. Using ACE, segmentation was performed on all four crops over a range of growing conditions and stages of canopy development. Representative examples of segmentation for the four crops are shown in Figure 4; Figure 4a,c,e,g show the original terrestrial photographs of corn, flax, rapeseed and oat, respectively, and the corresponding segmented images are shown in Figure 4b,d,f,h. Remotely sensed corn and its segmentation are shown in Figure 4i,j. Examination of the segmentation results shows the utility of ACE for accurately separating vegetation from backgrounds under varying conditions. The images all contain regions of shadow that can lead to misclassification of background pixels, especially for visual (ocular) segmentation; the corresponding segmentations show that ACE accurately excludes the shadowed areas from the vegetation. ACE also successfully compensated for variations in lighting and in the quality of individual images.
Automation of the segmentation process removes the component of human error and ensures the replicability of the results. The technique works for the entire range of fractional vegetation cover, from fully open to fully closed. Segmentation of each image takes less than a second on a computer with a 2.2 GHz CPU and 8 GB of memory, compared with just under five seconds for SHAR-LAB and approximately two minutes for a supervised technique such as SamplePoint [37] (http://samplepoint.org/), in which 64 randomly selected points per image are classified by a human observer. Though the images were captured with two separate cameras, the segmentation was consistent across the dataset, a result that has not been achieved in previous studies (see [10,15] for example).

3.2. Fractional Vegetation Cover Estimation

Twenty images of each crop, a total of 80 images, were selected for testing. An additional six per crop were used as a development set. The images cover the full range of fractional vegetation cover from open to near closed. Additionally, the crops present a challenge for segmentation as they have different leaf shapes, sizes and orientations and a variety of growth patterns while being photographed under varying lighting conditions.

3.2.1. Ground Truth Image Segmentation

Ground truth segmentation values were estimated by two individuals using Photoshop (Adobe Systems Incorporated, San Jose, CA, USA); each pixel was classified as either leaf or soil/crop residue. The FVC percentage was calculated as the fraction of pixels classified as part of the canopy. These values were taken as ground truth despite the known limitations of the technique: user bias, age-related color perception challenges and other natural variation between users.

3.2.2. Comparison of Image Segmentation Techniques

In order to validate the accuracy of the segmentations generated by ACE, comparisons were made with nine other segmentation algorithms: eight threshold-based approaches—CIVE [27], ExG & Otsu [28], the visible vegetation index (VVI), Mean Shift (MS) and its combinations with vegetation indices (MSCIVE, MSExG and MSVVI) [38], and SHAR-LABFVC [31]—and one machine learning approach, that of Bai et al. [30]. Segmentation accuracy was determined using the following measure:
\text{accuracy} = 100 \times \frac{\left| A \cap B \right|}{\left| A \cup B \right|},
where A is the set of pixels marked as crop canopy in the ground truth image and B is the set of pixels marked as crop canopy in the segmentation. This measure determines how closely the segmentation matches the ground truth, with 100% indicating an exact match and perfect segmentation. Table 1 lists the mean accuracies and standard deviations of all ten segmentation algorithms.
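This measure is the intersection-over-union (Jaccard index) of the two canopy masks, expressed as a percentage; a minimal sketch for two boolean arrays follows.

```python
# Sketch of the accuracy measure: intersection over union of the canopy
# masks, as a percentage.
import numpy as np

def segmentation_accuracy(ground_truth, segmented):
    a = np.asarray(ground_truth, dtype=bool)
    b = np.asarray(segmented, dtype=bool)
    return 100.0 * np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```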
ACE outperforms the nine other segmentation algorithms evaluated on the current dataset. The improvement over Bai et al. (between 1.2% and 5.4%) and SHAR-LAB (between 2.2% and 8.5%) indicates the level of performance ACE achieves on a challenging dataset. One-way ANOVA showed that the segmentation results for rapeseed, oat and flax were significantly (α = 0.05) better than those of all algorithms except Bai et al. and SHAR-LAB. Standard deviations were also quite low: ACE had the lowest standard deviations across the test set for corn, oat and flax. For rapeseed, CIVE and ExG have lower standard deviations; however, both have very low accuracy, and their low standard deviations merely indicate that the accuracies are consistently low across the test set. When the accuracies for all crops are combined, ANOVA testing confirms that ACE produces the most accurate segmentations across the dataset (α = 0.05). While ACE produced more accurate results than Bai et al. overall (an average of 89.6% compared to 86.1%), the two are statistically similar. ACE also produces estimates with the lowest variance, showing a high degree of consistency across all crops. The boxplot (Figure 5) highlights this result graphically.

3.2.3. Fractional Vegetation Cover Estimates

The accuracy of the FVC values estimated by ACE and the other nine techniques was evaluated by comparing the estimated FVC values with the corresponding ground truth values. Table 2 lists the root mean square error (RMSE) and the standard deviation of the differences for all algorithms. Over the wide range of FVC values observed, there was no systematic error in the ACE estimation process, with some values being overestimated and others underestimated. Overall, ACE produces the most accurate estimates of FVC, as evidenced by the RMSE and standard deviation values in Table 2; the low standard deviation highlights the consistency of the FVC values estimated by ACE. ACE outperforms the other algorithms for oat and rapeseed, with RMSE values of 5.8% (σ = 6.0%) and 5.3% (σ = 4.3%), respectively. Ninety percent of the errors for oat and 85% for rapeseed fall within one standard deviation of the mean, suggesting a high level of agreement between the estimates generated by ACE and the ground truth values. The errors occur for both high and low fractional vegetation cover values, reinforcing the overall lack of bias. ACE and Bai et al. have a similar RMSE for rapeseed, but the standard deviation for ACE is lower, suggesting more consistent FVC estimation. Bai et al. produces estimates with marginally better RMSE values for corn and flax; the standard deviations are identical for corn, while, for flax, the standard deviation is lower for Bai et al.
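For reference, the error statistics reported in Table 2 can be computed from paired estimated and ground truth FVC values as sketched below (the per-image pairing of values is assumed).

```python
# Sketch of Table 2's statistics: RMSE and standard deviation of the
# differences between estimated and ground truth FVC values (in percent).
import numpy as np

def fvc_error_stats(estimated, truth):
    diff = np.asarray(estimated, dtype=float) - np.asarray(truth, dtype=float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return rmse, np.std(diff, ddof=1)
```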

3.2.4. Segmentation and FVC Estimates for Remotely Sensed Corn

The ten algorithms were used to perform segmentation on several images of corn taken by a camera suspended from a boom arm (see Section 2 for details of data collection). The mean segmentation accuracy of ACE was 88.7%, with a standard deviation of 5.4%, the highest accuracy and lowest standard deviation of the algorithms tested (see Table 3 and Figure 4). This high segmentation accuracy and low standard deviation are consistent with the results obtained for terrestrial photographs of corn in Section 3.2.2. Compared to the ground truth values, the FVC estimates resulting from the ACE segmentation had an RMSE and standard deviation of 4.1% and 4.1%, respectively (Table 3). The FVC values produced by ACE were only fractionally better than those produced by SHAR-LAB (by 0.4% in RMSE and 0.1% in standard deviation), but much better than those of the other eight algorithms, including Bai et al. [30].

4. Discussion

4.1. Comparative Advantages of ACE

ACE offers an accurate, low-cost approach to FVC estimation that automates the segmentation process, eliminates human subjectivity and provides a more efficient means of surface characterization that closely matches the ground truth. Consistency in estimation across the four crop types evaluated suggests ease of application to other crops.
The current approach appears to overcome many of the limitations faced by other FVC estimation techniques. For example, ACE accurately estimates FVC across the full range of canopy development, from sparse cover to near closure. Unlike the other algorithms presented, ACE maintains its accuracy even when the photographs are taken with multiple cameras (increasing the options for inexpensive data capture) and under different lighting conditions with varying degrees of shadowing. Accuracy, flexibility, full automation and speed are the properties of ACE that improve on state-of-the-art algorithms.
ACE performs well in cases where the distribution of color values is not bimodal because it is flexible enough to correctly estimate segmentation thresholds even where two peaks are not visible. The test set had several such examples, and, in those cases, ACE outperformed the other algorithms. For instance, the explicit reliance on bimodality and the assumption that errors in classification are equal for the vegetation and background [31] led to under-segmentation and the inclusion of shaded background regions by SHAR-LAB.

4.2. Implications for Remote Sensing

Testing with remotely sensed corn produced an accuracy of 88.7%, similar to that obtained with terrestrial images and higher than that of the other techniques. The low standard deviation points to the consistency of the segmentations produced by ACE. The FVC estimates were similarly close to the ground truth values, with approximately 4% error on average. ACE outperformed all the other algorithms, though its RMSE and standard deviation are statistically similar to those of SHAR-LAB.
The accuracy and flexibility of the proposed algorithm point to its potential for use in more remote sensing applications than the one currently evaluated. In particular, the ability to accurately segment images containing significant shadowing of the background is seen as beneficial for remotely captured images. This functionality is important when validating the FVC values estimated from airborne or spaceborne platforms. The fact that ACE has been shown to accurately estimate the FVC of different crop types from photographs taken at heights of up to 7 m emphasizes its utility as more than a ground-based FVC estimation tool. The use of near-ground remotely sensed images provides the opportunity to increase sample collection, survey efficiency, spatial and temporal resolution as well as decrease the unit cost of sampling [22,25,39,40]. ACE, therefore, makes a contribution to the existing approach and offers the prospect of advancing the emerging approach of near-ground remote sensing.
The variety of crops tested and the resulting segmentation accuracy indicates that ACE can work with a range of crops with different growth patterns and leaf geometries. This is a distinct advantage when validating FVC derived from remotely sensed images. It has been shown that the validation process is more time consuming and complex, requiring more than one estimation algorithm, when the FVC estimators are limited in the range of crops for which they produce accurate results [41].
ACE is the most consistently accurate of the algorithms tested. The results of tests on both terrestrial and remotely sensed crops show that the software produces similarly accurate segmentations and FVC values. This suggests that ACE is able to overcome issues specific to remotely sensed crops, and thus it shows great potential for use in remote sensing applications.

4.3. Implications for Senescent Crop Cover

After the onset of senescence, there is no easy way to measure fractional vegetation cover, owing to the dual coloration of leaves at this stage and the intermingling of green and senescent material [42]. Preliminary analyses (data not presented) suggest that ACE can be used to distinguish between senescent and green vegetation, thus obviating the need to employ expensive, time-consuming methods to first calculate LAI and then convert to FVC. Further research is required to validate the use of ACE to reduce the difficulties associated with measuring fractional vegetation cover during senescence.

4.4. Application of ACE to Crop Modelling

Accurate estimates of FVC are required for crop simulation models. In models such as AquaCrop (Food and Agriculture Organization of the United Nations, Rome, Italy, http://www.fao.org/nr/water/aquacrop.html), an FAO model largely developed in recognition of an impending food crisis in developing countries, canopy development is characterized by FVC alone rather than the more common leaf area index (LAI) [2,42]. It has recently been shown that, because ACE provides more accurate estimates of FVC than the segmentation tools previously used, improved model outputs are obtained [43].
There are, as a result, implications for the wider use of crop modelling as a tool for predicting the effects of climate change on crop production in the Caribbean and in similar small island developing states. With access to accurate values of FVC, existing relationships between FVC and LAI can be validated or refined [11], and new relationships can be developed for crop species that have not yet been studied. This is especially important for foods, such as sweet potato (Ipomoea batatas L.), that are central to food and nutritional security. As there are no existing databases of FVC values for many of these crops, ACE will be useful in providing either FVC or LAI estimates for use with crop models. The results of such studies would give access to currently unavailable parameters and could increase the use of crop models to optimize agricultural crop production and policy.

5. Conclusions

ACE is an accurate, automated method of segmenting both terrestrial and remotely sensed digital photographs, and thus of estimating fractional vegetation cover. The accuracy of the method was evaluated on four different crops—corn, rapeseed, oat and flax—and compared to that of nine other segmentation algorithms. The ACE algorithm was then applied to the terrestrial and remotely sensed photographs to produce estimates of FVC from the segmentations. For all the crops in the study, ACE either outperformed or matched the FVC estimation of all the other algorithms, including the state-of-the-art machine learning algorithm.
Such an accurate, fast and automated method for estimating FVC from digital photographs is potentially beneficial for many applications, including crop modelling. With improved accuracy, existing relationships between FVC and LAI can be validated or refined and new relationships developed for crops that are yet to be studied. Automation facilitates easier measurement of a crop development parameter, thereby removing one of the perceived barriers to calibrating and using crop models, particularly within regions like the Caribbean.

Software Availability

ACE is available online at http://173.230.158.211/.

Acknowledgments

This research has been produced with the financial assistance of the European Union. The contents of this document are the sole responsibility of the University of the West Indies and its partners in the Global-Local Caribbean Climate Change Adaptation and Mitigation Scenarios (GoLoCarSce) project and can under no circumstances be regarded as reflecting the position of the European Union. The authors would like to thank X.D. Bai and Xihan Mu for providing their algorithms for comparison and K.C. DeJonge for providing the remotely sensed corn images. Finally, the authors thank the anonymous reviewers for their comments and suggestions, which have served to significantly improve the paper.

Author Contributions

André Coy developed ACE and wrote the first draft of the manuscript. Dale Rankine gave advice during the development of ACE and contributed to the development and revision of the manuscript. Michael Taylor and Jane Cohen supervised the project that led to the development of the algorithm and made significant contributions to the development of the manuscript. David Nielsen provided the images for analysis and contributed to the revision of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fiala, A.C.S.; Garman, S.L.; Gray, A.N. Comparison of five canopy cover estimation techniques in the western Oregon Cascades. For. Ecol. Manag. 2006, 232, 188–197.
  2. Steduto, P.; Hsiao, T.C.; Raes, D.; Fereres, E. AquaCrop—The FAO model to simulate yield response to water. I. Concepts and underlying principles. Agron. J. 2009, 101, 426–437.
  3. Behrens, T.; Diepenbrock, W. Using digital image analysis to describe canopies of winter oilseed rape (Brassica napus L.) during vegetative developmental stages. J. Agron. Crop Sci. 2006, 192, 295–302.
  4. Pan, G.; Li, F.; Sun, G. Digital camera based measurement of crop cover for wheat yield prediction. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 797–800.
  5. Richardson, M.D.; Karcher, D.E.; Purcell, L.C. Quantifying turfgrass cover using digital image analysis. Crop Sci. 2001, 41, 1884–1888.
  6. Jia, L.; Chen, X.; Zhang, F. Optimum nitrogen fertilization of winter wheat based on color digital camera image. Commun. Soil Sci. Plant Anal. 2007, 38, 1385–1394.
  7. Li, Y.; Chen, D.; Walker, C.N.; Angus, J.F. Estimating the nitrogen status of crops using a digital camera. Field Crop. Res. 2010, 118, 221–227.
  8. Sakamoto, T.; Gitelson, A.A.; Nguy-Robertson, A.L.; Arkebauer, T.J.; Wardlow, B.D.; Suyker, A.E.; Verma, S.B.; Shibayama, M. An alternative method using digital cameras for continuous monitoring of crop status. Agric. For. Meteorol. 2012, 154–155, 113–126.
  9. Ding, Y.; Zheng, X.; Zhao, K.; Xin, X.; Liu, H. Quantifying the impact of NDVIsoil determination methods and NDVIsoil variability on the estimation of fractional vegetation cover in Northeast China. Remote Sens. 2016, 8, 29.
  10. Liu, J.; Pattey, E. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agric. For. Meteorol. 2010, 150, 1485–1490.
  11. Nielsen, D.; Miceli-Garcia, J.J.; Lyon, D.J. Canopy cover and leaf area index relationships for wheat, triticale, and corn. Agron. J. 2012, 104, 1569–1573.
  12. Córcoles, J.I.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle. Biosyst. Eng. 2013, 115, 31–42.
  13. Wang, Y.; Wang, D.J.; Zhang, G.; Wang, J. Estimating nitrogen status of rice using the image segmentation of G-R thresholding method. Field Crop. Res. 2013, 149, 33–39.
  14. Booth, D.T.; Cox, S.E.; Fifield, C.; Phillips, M.; Williamson, N. Image analysis compared with other methods for measuring ground cover. Arid Land Res. Manag. 2005, 19, 91–100.
  15. Guevara-Escobar, A.; Tellez, J.; Gonzalez-Sosa, E. Use of digital photography for analysis of canopy closure. Agrofor. Syst. 2005, 65, 1–11.
  16. Laliberte, A.S.; Rango, A.; Herrick, J.E.; Fredrickson, E.L.; Burkett, L. An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography. J. Arid Environ. 2007, 69, 1–14.
  17. Lee, K.; Lee, B. Estimating canopy cover from color digital camera image of rice field. J. Crop Sci. Biotechnol. 2011, 14, 151–155.
  18. Yu, Z.-H.; Cao, Z.-G.; Wu, X.; Bai, X.; Qin, Y.; Zhuo, W.; Xiao, Y.; Zhang, X.; Xue, H. Automatic image-based detection technology for two critical growth stages of maize: Emergence and three-leaf stage. Agric. For. Meteorol. 2013, 174, 65–84.
  19. Sui, R.X.; Thomasson, J.A.; Hanks, J.; Wooten, J. Ground-based sensing system for weed mapping in cotton. Comput. Electron. Agric. 2008, 60, 31–38.
  20. Abbasgholipour, M.; Omid, M.; Keyhani, A.; Mohtasebi, S.S. Color image segmentation with genetic algorithm in a raisin sorting system based on machine vision in variable conditions. Expert Syst. Appl. 2011, 38, 3671–3678.
  21. Przeszlowska, A.; Trlica, M.; Weltz, M. Near-ground remote sensing of green area index on the shortgrass prairie. Rangel. Ecol. Manag. 2006, 59, 422–430.
  22. Zelikova, T.J.; Williams, D.G.; Hoenigman, R.; Blumenthal, D.M.; Morgan, J.A.; Pendall, E. Seasonality of soil moisture mediates responses of ecosystem phenology to elevated CO2 and warming in a semi-arid grassland. J. Ecol. 2015, 103, 1119–1130.
  23. Mu, X.; Hu, M.; Song, W.; Ruan, G.; Ge, Y.; Wang, J.; Huang, S.; Yan, G. Evaluation of sampling methods for validation of remotely sensed fractional vegetation cover. Remote Sens. 2015, 7, 16164–16182.
  24. Ding, Y.; Zheng, X.; Jiang, T.; Zhao, K. Comparison and validation of long time serial global GEOV1 and regional Australian MODIS fractional vegetation cover products over the Australian continent. Remote Sens. 2015, 7, 5718–5733.
  25. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Rindinella, A.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68.
  26. Booth, D.T.; Cox, S.E.; Meikle, T.W.; Fitzgerald, C. The accuracy of ground-cover measurements. Rangel. Ecol. Manag. 2006, 59, 179–188.
  27. Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Kobe, Japan, 20–24 July 2003; Volume 2, pp. b1079–b1083.
  28. Neto, J.C. A Combined Statistical-Soft Computing Approach for Classification and Mapping Weed Species in Minimum-Tillage Systems. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, 2006.
  29. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  30. Bai, X.D.; Cao, Z.G.; Wang, Y.; Yu, Z.H.; Zhang, X.F.; Li, C.N. Crop segmentation from images by morphology modeling in the CIE L*a*b* color space. Comput. Electron. Agric. 2013, 99, 21–34.
  31. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425–10443.
  32. Tian, L.F.; Slaughter, D.C. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 1998, 21, 153–168.
  33. Nielsen, D.C.; Lyon, D.J.; Hergert, G.W.; Higgins, R.K.; Calderon, F.J.; Vigil, M.F. Cover crop mixtures do not use water differently than single-species plantings. Agron. J. 2015, 107, 1025–1038.
  34. DeJonge, K.C.; Taghvaeian, S.; Trout, T.J.; Comis, L.H. Comparison of canopy temperature-based water stress indices for maize. Agric. Water Manag. 2015, 156, 51–62.
  35. Tkalĉiĉ, M.; Tasiĉ, J.F. Colour spaces: Perceptual, historical and applicational background. In Proceedings of the IEEE Region 8 EUROCON 2003, Computer as a Tool, Ljubljana, Slovenia, 22–24 September 2003; Volume 1, pp. 304–308.
  36. O’Haver, T. Peak Finding and Measurement. Custom Scripts for the MATLAB Platform. 2006. Available online: http://www.wam.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm (accessed on 15 June 2014).
  37. Booth, D.T.; Cox, S.E.; Berryman, R.D. Point sampling digital imagery with ‘SamplePoint’. Environ. Monit. Assess. 2006, 123, 97–108.
  38. Ponti, M.P. Segmentation of low-cost remote sensing images combining vegetation indices and mean shift. IEEE Geosci. Remote Sens. Lett. 2013, 10, 67–70.
  39. Seefeldt, S.S.; Booth, D.T. Measuring plant cover in sagebrush steppe rangelands: A comparison of methods. Environ. Manag. 2006, 37, 703–711.
  40. Booth, D.T.; Cox, S.E.; Meikle, T.; Zuuring, H.R. Ground-cover measurements: Assessing correlation among aerial and ground-based methods. Environ. Manag. 2008, 42, 1091–1100.
  41. Jia, K.; Liang, S.; Gu, X.; Baret, F.; Wei, X.; Wang, X.; Yao, Y.; Yang, L.; Li, Y. Fractional vegetation cover estimation algorithm for Chinese GF-1 wide field view data. Remote Sens. Environ. 2016, 177, 184–191.
  42. Steduto, P.; Hsiao, T.C.; Fereres, E.; Raes, D. Crop Yield Response to Water; FAO Irrigation and Drainage Paper 66; Food and Agriculture Organization of the United Nations (FAO): Rome, Italy, 2012.
  43. Rankine, D.R.; Cohen, J.E.; Taylor, M.A.; Coy, A.D.; Simpson, L.A.; Stephenson, T.; Lawrence, J.L. Parameterizing the FAO AquaCrop model for rainfed and irrigated field-grown sweet potato. Agron. J. 2015, 107, 375–387.
Figure 1. Flowchart of the ACE algorithm.
Figure 2. An image of flax (a) and the smoothed probability distribution of its a* color values (b). Tgb is the threshold for segmentation in the ACE algorithm.
Figure 3. An image of corn (a); its smoothed probability distribution (b); and the difference of the probability values (c). The inflection point (marked as Tgb) in subfigure (b) indicates the segmentation threshold; Tgb is detected as the second peak in the difference curve in subfigure (c).
Figure 4. (a–j) Image segmentation using ACE. Photographs taken with a handheld digital camera along with the respective segmentations performed by ACE: corn (a,b); flax (c,d); rapeseed (e,f); and oat (g,h). Remotely sensed corn, taken from a platform 7 m above the soil surface (i); and its ACE segmentation (j).
Figure 5. Boxplot of segmentation accuracy of all algorithms tested for the combined dataset. ACE has an overall higher segmentation accuracy than all other algorithms; Bai et al. is second best, with an overall accuracy of 86.1%. ACE also has the lowest overall standard deviation across all crop types.
Table 1. Mean segmentation accuracy (µ) and standard deviation (σ) for the ten segmentation algorithms evaluated. Accuracies are shown for each crop and for all crops combined.

| Algorithm | Corn µ (%) | Corn σ (%) | Oat µ (%) | Oat σ (%) | Flax µ (%) | Flax σ (%) | Rapeseed µ (%) | Rapeseed σ (%) | Overall µ (%) | Overall σ (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| CIVE | 40.0 | 18.0 | 63.0 | 8.0 | 60.0 | 18.0 | 51.0 | 1.5 | 52.6 | 17.3 |
| ExG | 67.0 | 8.0 | 58.0 | 9.0 | 63.0 | 16.0 | 50.0 | 1.5 | 59.6 | 13.6 |
| VVI | 30.0 | 8.0 | 45.0 | 9.0 | 48.4 | 10.0 | 39.4 | 10.0 | 40.0 | 10.9 |
| MS | 35.0 | 9.0 | 54.0 | 7.0 | 48.0 | 10.0 | 43.1 | 13.0 | 44.2 | 11.7 |
| MSCIVE | 85.4 | 6.0 | 61.0 | 25.0 | 74.0 | 6.0 | 75.4 | 10.0 | 74.4 | 15.8 |
| MSExG | 85.0 | 7.0 | 62.0 | 25.0 | 73.0 | 6.0 | 76.0 | 8.0 | 74.4 | 15.2 |
| MSVVI | 32.3 | 9.0 | 55.0 | 7.0 | 44.0 | 10.0 | 42.0 | 12.0 | 42.5 | 11.9 |
| Bai et al. | 88.0 | 5.0 | 85.0 | 6.4 | 84.4 | 8.0 | 87.0 | 8.1 | 86.1 | 7.0 |
| SHAR-LAB | 87.0 | 5.0 | 82.3 | 11.6 | 81.3 | 6.0 | 85.0 | 10.0 | 82.3 | 9.0 |
| ACE | 89.2 | 2.6 | 89.1 | 4.3 | 89.8 | 5.1 | 90.4 | 4.5 | 89.6 | 4.5 |
Table 2. RMSE and standard deviation (σ) of the differences between estimated FVC values and ground truth FVC values for the ten algorithms evaluated.

| Algorithm | Corn RMSE (%) | Corn σ (%) | Oat RMSE (%) | Oat σ (%) | Flax RMSE (%) | Flax σ (%) | Rapeseed RMSE (%) | Rapeseed σ (%) | Overall RMSE (%) | Overall σ (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| CIVE | 33.7 | 17.4 | 17.9 | 14.3 | 35.0 | 14.5 | 31.1 | 31.8 | 30.2 | 24.8 |
| ExG | 11.7 | 7.1 | 18.6 | 7.0 | 14.8 | 7.4 | 33.3 | 12.4 | 21.3 | 11.9 |
| VVI | 20.0 | 17.3 | 21.8 | 13.8 | 15.4 | 15.8 | 36.1 | 29.3 | 24.6 | 21.3 |
| MS | 16.8 | 16.0 | 15.2 | 15.3 | 19.4 | 18.9 | 34.3 | 31.4 | 22.7 | 22.8 |
| MSCIVE | 4.7 | 4.8 | 29.1 | 22.7 | 8.6 | 8.6 | 20.9 | 14.3 | 18.6 | 16.3 |
| MSExG | 5.7 | 5.7 | 28.4 | 22.8 | 8.2 | 8.1 | 19.7 | 12.8 | 18.0 | 15.6 |
| MSVVI | 16.6 | 16.2 | 15.4 | 15.4 | 18.0 | 18.4 | 34.7 | 31.2 | 22.6 | 22.5 |
| Bai et al. | 2.1 | 2.2 | 8.2 | 8.4 | 5.6 | 4.2 | 5.3 | 5.0 | 5.7 | 5.6 |
| SHAR-LAB | 9.2 | 6.4 | 16.0 | 11.4 | 11.6 | 6.0 | 8.7 | 4.7 | 11.7 | 7.7 |
| ACE | 2.7 | 2.2 | 5.8 | 6.0 | 5.7 | 5.8 | 5.3 | 4.3 | 5.0 | 4.9 |
Table 3. Segmentation accuracy, and RMSE and standard deviation (σ) of the differences between estimated and ground truth FVC values, for the ten algorithms evaluated on a remotely sensed corn crop.

| Algorithm | Segmentation µ (%) | Segmentation σ (%) | FVC RMSE (%) | FVC σ (%) |
|---|---|---|---|---|
| CIVE | 53.7 | 19.7 | 29.9 | 22.2 |
| ExG | 41.0 | 18.9 | 31.2 | 16.5 |
| VVI | 41.7 | 18.8 | 20.0 | 20.3 |
| MS | 44.2 | 17.6 | 22.6 | 23.2 |
| MSCIVE | 65.4 | 11.7 | 16.8 | 17.1 |
| MSExG | 65.5 | 11.5 | 15.3 | 15.6 |
| MSVVI | 43.2 | 17.7 | 22.3 | 22.9 |
| Bai et al. | 74.9 | 17.7 | 13.1 | 8.4 |
| SHAR-LAB | 81.1 | 11.4 | 4.5 | 4.2 |
| ACE | 88.7 | 5.4 | 4.1 | 4.1 |

