Article

A Comparison of Multi-Temporal RGB and Multispectral UAS Imagery for Tree Species Classification in Heterogeneous New Hampshire Forests

Department of Natural Resources & the Environment, University of New Hampshire, 56 College Rd, Durham, NH 03824, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(13), 2631; https://doi.org/10.3390/rs13132631
Submission received: 4 June 2021 / Revised: 28 June 2021 / Accepted: 1 July 2021 / Published: 4 July 2021

Abstract

Unmanned aerial systems (UASs) have recently become an affordable means to map forests at the species level, but research into the performance of different classification methodologies and sensors is necessary so users can make informed choices that maximize accuracy. This study investigated whether multi-temporal UAS data improved the classification accuracy of 14 species, examined the optimal time-window for data collection, and compared the performance of a consumer-grade RGB sensor to that of a multispectral sensor. A time series of UAS data was collected from early spring to mid-summer and a sequence of mono-temporal and multi-temporal classifications was carried out. Kappa comparisons were conducted to ascertain whether the multi-temporal classifications significantly improved accuracy and whether there were significant differences between the RGB and multispectral classifications. The multi-temporal classification approach significantly improved accuracy; however, there was no significant benefit when more than three dates were used. Mid- to late spring imagery produced the highest accuracies, potentially due to high spectral heterogeneity between species and homogeneity within species during this time. The RGB sensor exhibited significantly higher accuracies, probably due to the blue band, which was found to be very important for classification accuracy and lacking in the multispectral sensor employed here.


1. Introduction

Detailed maps of forest composition are necessary for effective and efficient forest management [1,2]. Maps depicting species-level composition serve a number of applications, such as monitoring biodiversity [3,4], forest health assessments [5,6], conducting precision forestry [7,8] or as inputs for species-specific allometric models [9]. Remotely sensed imagery has been used for decades as a quick and efficient means to produce continuous, large-area maps of forest types [10,11]. However, traditional remote sensing platforms, such as satellite or aerial imagery, are incapable of providing the temporal and/or spatial resolutions necessary for species level mapping at an affordable cost [12,13]. Thanks to recent technological advancements, unmanned aerial systems (UASs) have become an affordable alternative, capable of providing the flexibility and resolution necessary to accurately map forest species composition [2,14].
UASs are easily capable of providing centimeter-level imagery that can be used to identify individual plants [15,16]. Many studies have now employed UASs to map individual or small groupings of invasive plants [17,18,19,20], shrubs, grasses, and forbs [21,22,23,24,25], as well as wetlands [26,27]. More studies are now evaluating UAS-derived orthomosaics for tree species mapping [6,28,29,30,31]. All are taking advantage of the imagery’s high spatial resolution to distinguish and classify individual trees or small groupings of trees of the same species with positive results.
Besides the significantly higher spatial resolution, the flexibility of the UAS platform is another major advantage. For one, UAS platforms can be equipped with different sensors capable of acquiring information from different portions of the electromagnetic spectrum (EMS), such as the visible bands (RGB), red edge, and near infrared (NIR) [14,32]. Typically, though, cost and payload weight limit the sensor used [14,16,33]. As a result, consumer-grade digital cameras are often employed in UAS studies [21,34,35,36,37]. The downside of these cameras, however, is that they ordinarily only capture reflectance in the visible range of the EMS (i.e., RGB cameras). Most land cover classifications, especially of vegetation, require multispectral sensors capable of sensing beyond the visible range of the EMS, frequently in the NIR spectrum, in order to improve the distinction between classes that are spectrally similar in the visible range [38,39]. Many studies have modified the spectral sensitivity of the bands in consumer-grade cameras by adding or removing filters from the camera lens, usually to capture NIR reflectance [6,18,22,40,41]. The modified cameras, however, are not perfect substitutes for true multispectral cameras. All three bands on a consumer-grade camera are sensitive to NIR energy, and thus removing the filter that blocks NIR energy from reaching the sensor can cause redundant band sensitivity, or spectral overlap, between bands. This spectral overlap reduces the potential for discrimination between features. Several studies have found that RGB imagery performed better than the CIR imagery from a modified camera [6,28,42] and have suggested that the redundant sensitivity between the bands after modification reduced the ability to discriminate between species using CIR imagery. Franklin et al. (2018) [30] found that imagery collected by a true multispectral camera outperformed RGB imagery for tree species mapping. However, multispectral cameras can be more expensive [14,42], and thus more cost-effective methods of accurately generating this information would help make UASs more operationally feasible.
Taking advantage of UASs’ temporal flexibility may help to overcome limitations in sensor spectral resolution [2]. The much higher temporal resolution of the UAS platform is considered one of its major advantages over other remote sensing platforms [1,12,33]. In a multi-temporal classification, multiple dates of imagery are used to create a single land cover map, taking advantage of the spectral differences within and between species during this period to improve the accuracy of the map [43]. With an appropriately timed series of images, multiple species can be differentiated [2]. In highly heterogeneous forests with many species of trees, like those characteristic of New England, spectral separability is crucial [43,44,45].
Several studies have demonstrated the advantages of a multi-temporal classification for mapping forest composition with moderate resolution satellite imagery. However, it should be noted that these studies are typically classifying species mixtures rather than singular species, since the spatial resolution is usually larger than most tree crowns [44,46,47]. The use of high spatial resolution imagery for multi-temporal species classification is uncommon [46,47,48] and the use of very high spatial resolution (sub-meter), non-UAS imagery is scarce, mainly due to the high costs for both [28]. While several studies have taken advantage of the temporal resolution of the UAS for other applications [34,37,49,50,51], few have done so for tree species classification [6,28].
As the availability of and access to high, and now very high, spatial resolution imagery has increased, there has been a shift away from traditional per-pixel image processing for detecting and mapping features of interest toward an object-based approach [52,53]. Object-based image analysis was a move towards integrating more spatial information into the classification/feature detection process in an effort to mimic human photointerpretation [54]. More recently, improvements in computer hardware have made deep learning algorithms, like the popular convolutional neural networks (CNNs), a viable tool. Deep learning seeks to train computers to think like humans and automatically identify features in an image [55]. Deep learning CNNs have performed well with very high resolution imagery but, as pointed out by Bhuiyan et al. [56], can only utilize three spectral bands. Users must typically choose a limited subset of all the available bands [56,57,58], which would limit the use of multi-temporal datasets, which contain numerous bands. Furthermore, deep learning approaches perform best with a large quantity of reference data and require substantial computing power [59]. Meanwhile, computationally efficient machine learning algorithms, such as random forest, are readily available in many coding languages, such as R and Python, and have been found to perform well with high-dimensional, multi-temporal datasets [6,28,60,61].
The integration of UASs into the field of remote sensing is very recent, and given the inherent differences between UASs and traditional remote sensing platforms/data, there is a need to explore how UASs perform in a variety of applications and environments to better inform end-users on how best to employ them. This study sought to investigate whether multi-temporal classification of RGB and multispectral UAS imagery improved the accuracy of species-level forest composition maps in a highly heterogeneous forest in New Hampshire, USA. Additionally, an optimal phenological window for data collection was investigated and the accuracy of the maps produced from RGB imagery was compared to that of the maps produced from the multispectral imagery. This study will inform users on data collection strategies that may help to optimize accuracy in these complex environments.

2. Materials and Methods

2.1. Study Area Description

This study was conducted at Kingman Farm in Madbury, NH, USA (Figure 1). The property is owned by the University of New Hampshire (UNH) and is comprised of both agricultural fields and research support buildings for the NH Agricultural Experiment Station, as well as 101 ha of forest which are managed by the UNH Office of Woodlands and Natural Areas for the purposes of education, research, and conservation. From this point forward, any reference to Kingman Farm, or just Kingman, will be used to indicate the forested lands on the property. The Kingman Farm forests are an example of a hemlock–beech–oak–pine forest community [62], dominated by white pine (Pinus strobus), eastern hemlock (Tsuga canadensis), red maple (Acer rubrum), red oak (Quercus rubra), and American beech (Fagus grandifolia). The land-use history of the property and surrounding region, combined with the ongoing management practices within the woodlot, has resulted in a considerable mix of species. A recent inventory of the property conducted in 2017 as part of the UNH Continuous Forest Inventory (CFI) Program detected 16 different species of trees on the property.
It is important to note several characteristics within the study site that may potentially affect the within-species spectral response. Hemlock woolly adelgid and beech bark disease are widespread throughout the study site. Infected eastern hemlock and American beech trees may exhibit differing spectral patterns compared to uninfected individuals. Additionally, the study site encompasses a range of hydrologic conditions, from dry uplands to permanently saturated swamps. Facultative species like red maple tend to exhibit wide variability in phenology due to their ability to tolerate a multitude of conditions [63].
In order to adhere to Part 107 of the U.S. Federal Aviation Administration Regulations (Small Unmanned Aircraft Systems, 14 C.F.R. Part 107) and to maintain the safety of the research team and others, only a portion of the Kingman Farm was covered by the UAS, as indicated in Figure 1. The far eastern half of the property is classified as Class E to Surface airspace belonging to the Pease International Airport and is off limits to UASs; it was thus removed from the study area. Additional limits were placed on the UAS mission area to ensure the pilot and visual observers could maintain visual line-of-sight as well as a constant radio connection with the UAS while flying.

2.2. UAS Data Collection

All flights were carried out with a Sensefly eBee X fixed-wing UAS and the eMotion 3 mission planning software [64]. Two sensors, the Sensefly Aeria X and the Parrot Sequoia, were flown to collect the RGB and multispectral imagery, respectively. The specifications for each camera are provided in Table 1.
The Aeria X is a standard DSLR camera and employs a common APS-C sensor capable of capturing normal color (RGB) imagery. The Parrot Sequoia is a multispectral sensor specifically designed for vegetation mapping and monitoring. As such, it captures spectral information in the green, red, red edge, and NIR portions of the EMS. While the Sequoia camera does carry an additional RGB sensor, this sensor is not optimized for the generation of the orthomosaics and was not utilized [65].
Imagery was collected over Kingman Farm between April 2019 and June 2020. The goal was to fly bi-weekly from the very beginning of the growing season through to the end in order to capture the full phenology of the forest with both sensors. There was a preference to fly on cloudy days to maintain consistent illumination across all the images and to avoid shadows. When this was not possible, the imagery was collected under clear or nearly clear conditions and as close to solar noon as possible. All missions were flown 100 m above the trees (approximately 120 m above the ground) with an 80% latitudinal overlap and an 85% longitudinal overlap. The Sequoia requires an additional radiometric calibration prior to each flight using a calibration target with a known albedo.
Table 2 shows the collection dates for both cameras with a seasonal descriptor. Due to weather, flight constraints, and equipment malfunctions, it was not possible to collect all the imagery within a single growing season. Within-sensor collections were largely within the same year (2019 for the Aeria X and 2020 for the Sequoia), with the exception of the first and last dates of collection for the Aeria X. Every effort was made to keep the between-sensor collections as close as possible in order to avoid large differences in phenology when comparing sensors. Weather conditions between 2019 and 2020 were similar. May and June 2020 were roughly two degrees warmer and June 2020 received two more inches of rain compared to June 2019. A visual inspection of the imagery did not show significant differences in phenology, however.

2.3. Imagery Pre-Processing and Orthomosaic Generation

Due to the high canopy cover in the study area, it was not possible to set ground control points (GCPs) across the woodlot to improve the positional accuracy of the orthomosaics. The eBee X, however, is real-time kinematic (RTK)-enabled, and thus the raw GPS positions for each image could be post-processed. All the raw UAS imagery was pre-processed using the Sensefly Flight Data Manager built into the eMotion 3 software. The Flight Data Manager extracted the geotags for all the images stored in the mission flight logs and then used a post-processed kinematic (PPK) technique to correct the positions. A CORS station located approximately 3.85 km from the center of the study area (station ID: NHUN) was used for all PPK processing. The software then re-geotagged the images with the corrected positions.
Each date of collection was processed in Agisoft Metashape Professional (formerly Agisoft Photoscan) [66]. Agisoft utilizes structure from motion (SfM) and multi-view stereo (MVS) processes to generate a georeferenced orthomosaic, or ortho. Points representing different features within each image are detected and then matched across multiple overlapping images. The matched points, called tie points, are then utilized to estimate the interior and exterior orientation parameters of the camera for each image. The reprojection error for all models ranged between 0.448 and 1.28 px. The original point cloud, or sparse point cloud, formed from the tie points is densified by matching pixel windows between successive image pairs using the estimated camera orientations [67,68]. A digital surface model (DSM) is generated from the dense point cloud, which is then used to orthorectify the images. The rectified images are then mosaicked together to form the final orthomosaic. Specifically, within the Agisoft software, the Align Photos tool was run in the high accuracy mode with generic preselection, guided image matching, and adaptive camera model fitting turned on. The dense point cloud generation was run in high quality with mild filtering.
While all the missions were flown with the same parameters, the different focal lengths of the two sensors resulted in very different spatial resolutions for the resulting orthomosaics. The coarsest spatial resolution of the Aeria X and Sequoia orthos were 2.7 cm and 11.9 cm, respectively. In order to eliminate spatial resolution as a factor when comparing the performance of the two sensors, all the orthos were exported at a 12 cm spatial resolution from Agisoft. They were then georeferenced to improve the positional agreement. The 27 June 2020 Aeria X orthomosaic was chosen as the base ortho. The remaining orthomosaics were then registered to the base ortho using several well-dispersed structural features across the study site and rectified using an affine transformation and nearest neighbor resampling.
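The orthomosaics were exported at 12 cm directly from Agisoft; for readers reproducing this step outside that software, the following is a minimal sketch of an equivalent resampling operation using the open-source rasterio library (an assumption of this example, not a tool named in the paper). File names are hypothetical.

```python
import rasterio
from rasterio.enums import Resampling

target_res = 0.12  # target pixel size in metres (12 cm)

with rasterio.open("ortho_aeria_2019-05-30.tif") as src:   # hypothetical input ortho
    scale = src.res[0] / target_res                         # < 1 when coarsening
    new_height = int(src.height * scale)
    new_width = int(src.width * scale)

    # Read and resample in one step; nearest neighbor matches the resampling
    # used for the georeferencing step described above.
    data = src.read(
        out_shape=(src.count, new_height, new_width),
        resampling=Resampling.nearest,
    )
    # Adjust the affine transform so pixel size reflects the new grid.
    transform = src.transform * src.transform.scale(
        src.width / new_width, src.height / new_height
    )
    profile = src.profile
    profile.update(height=new_height, width=new_width, transform=transform)

with rasterio.open("ortho_aeria_12cm.tif", "w", **profile) as dst:
    dst.write(data)
```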

2.4. Reference Data Collection

The dense point cloud from the 27 June 2020 Aeria X imagery was exported and converted into a DSM with a 12 cm spatial resolution to match that of the orthomosaics. The DSM was then normalized using a digital terrain model (DTM) produced from a 2011 leaf-off LiDAR collection for coastal New Hampshire and downloaded from the GRANIT LiDAR Distribution site (https://lidar.unh.edu/map/, accessed on 2 July 2021) to produce the canopy height model (CHM). Due to the inability of photogrammetrically produced point clouds to accurately capture the ground, externally produced DTMs, typically from LiDAR, are commonly used to normalize those produced from imagery [34,35,69]. Based on the land-use history of the site, there was no concern about the age of the DTM. A 3 × 3 cell Gaussian filter was then applied to the CHM to reduce the noise in the original model [70]. Pixels with a height less than 5 m were considered non-forested and subsequently masked from the CHM.
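A minimal sketch of the CHM workflow described above (DSM minus LiDAR DTM, 3 × 3 Gaussian smoothing, and masking of pixels below 5 m), assuming the rasterio and SciPy libraries and hypothetical file names; the authors' exact tools are not specified, and the 3 × 3 Gaussian kernel is approximated here with sigma = 1 and truncate = 1.0.

```python
import numpy as np
import rasterio
from scipy.ndimage import gaussian_filter

# Photogrammetric DSM and LiDAR DTM, assumed co-registered on the same 12 cm grid.
with rasterio.open("dsm_12cm.tif") as src:        # hypothetical file names
    dsm = src.read(1).astype("float32")
    profile = src.profile
with rasterio.open("lidar_dtm_12cm.tif") as src:
    dtm = src.read(1).astype("float32")

# Normalize the surface heights to canopy heights.
chm = dsm - dtm

# Approximate 3 x 3 Gaussian smoothing (sigma = 1, truncate = 1.0 -> 3 x 3 kernel).
chm_smooth = gaussian_filter(chm, sigma=1, truncate=1.0)

# Mask non-forest: pixels below 5 m become NoData.
chm_smooth = np.where(chm_smooth < 5.0, np.nan, chm_smooth).astype("float32")

profile.update(dtype="float32", count=1, nodata=np.nan)
with rasterio.open("chm_12cm.tif", "w", **profile) as dst:
    dst.write(chm_smooth, 1)
```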
A local maximum filter was used to generate points representing treetops for the entire study area [29,71]. Kingman Farm has a high stand density with highly variable crown widths. To ensure smaller crowns were appropriately captured, a 7 cell, or 84 cm wide, circular window was applied. This window size was chosen based on the smallest measured crown width from the 2017 CFI inventory of the Kingman Farm woodlot. While smaller window sizes will over-segment larger crowns [72], this is preferable to under-segmentation, which could result in the canopies of different species being grouped together, and has been found to improve classification accuracies [73,74].
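The local maximum filtering step could be sketched as follows with SciPy (an assumed tool, not necessarily the one used by the authors), using the 7 cell (84 cm) circular window described above.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window_cells=7, min_height=5.0):
    """Return row/col indices of CHM local maxima found with a circular window.

    A pixel is a treetop candidate if it equals the maximum within the window
    and exceeds the minimum canopy height used for the forest mask.
    """
    radius = window_cells // 2
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    footprint = (xx**2 + yy**2) <= radius**2        # circular footprint, 7 cells wide

    local_max = maximum_filter(chm, footprint=footprint, mode="nearest")
    treetops = (chm == local_max) & (chm >= min_height)
    return np.argwhere(treetops)

# Example with a synthetic CHM (illustration only).
chm = np.random.default_rng(0).uniform(0, 30, size=(200, 200)).astype("float32")
print(len(detect_treetops(chm)), "candidate treetops")
```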
An initial set of reference trees was selected from the 2017 CFI inventory. For each sampled tree, the distance and azimuth from the plot center to the center of the stem at breast height were recorded in addition to the tree species. This information was used to map the location of each sampled tree stem. Each mapped tree was first carefully inspected to determine whether the tree could visually be seen in the fully leaf-on imagery, and CFI trees that were obscured by taller trees were removed. Next, because the center of the stem of a leaning tree would not match the highest point of its crown, a visual inspection of the UAS imagery in Agisoft was used to select the local maximum for each of the remaining trees.
Based on the species represented in the chosen CFI trees, 14 species were chosen for classification (Table 3). These species were determined to have a high enough occurrence within the study area to ensure that a representative number of reference samples could be gathered. To improve the efficiency of the reference data collection, a random forest (RF) classification [60] was performed using the chosen CFI trees as training data. Each local maximum was assigned a preliminary classification based on the average spectral information from the 26 June 2020 Sequoia orthomosaic occurring within a 0.5 m buffer around each point. This preliminary classification was used to perform stratified random sampling. Each selected point was then carefully inspected using the high-resolution orthomosaics and adjusted as necessary. Field reconnaissance was carried out for those reference samples that were too difficult to photo interpret. One hundred samples per class (species) were collected per the recommendation of Congalton and Green [75]. These reference samples were then randomly divided into two independent groups, one for training the classification algorithm and the other for validation, with half the samples assigned to each.
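A hedged sketch of the stratified sampling and 50/50 training/validation split described above, assuming pandas and scikit-learn; the treetop table, species codes, and counts are placeholders, and the manual photo interpretation and field checks are indicated only as a comment.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical table of treetop points, each carrying the preliminary RF label
# used only to stratify the sampling (14 placeholder species codes).
rng = np.random.default_rng(1)
species = [f"sp{i:02d}" for i in range(14)]
points = pd.DataFrame({
    "point_id": np.arange(20000),
    "prelim_class": rng.choice(species, size=20000),
})

# Stratified random sample: 100 candidate reference points per class.
candidates = (
    points.groupby("prelim_class", group_keys=False)
          .apply(lambda g: g.sample(n=100, random_state=1))
)

# ... photo interpretation of the orthomosaics / field checks would happen here ...

# Split the verified reference samples 50/50 into independent training and
# validation sets, stratified by species so each class stays balanced.
train, valid = train_test_split(
    candidates, test_size=0.5, stratify=candidates["prelim_class"], random_state=1
)
print(len(train), "training and", len(valid), "validation samples")
```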
A marker-controlled watershed (MCW) segmentation was performed to delineate individual tree crowns. In a traditional watershed segmentation for tree crown delineation, a single banded image, typically representing height, is treated as a topographic surface [52,72]. The values are inverted so that local maximums (i.e., potential treetops) become local minimums and the catchment basins (i.e., crown boundaries) around all the local minima within the image are delineated. MCW segmentation requires an additional input, markers or points representing the local minima of interest. The basins associated with non-marker minima are converted to plateaus within the image and not delineated. The result is a one-to-one relationship between markers and basins, which reduces over-segmentation. In this study, the local maximums representing the tree crowns in the study area were used as the markers and the CHM was used to define the crown boundary.
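A minimal illustration of marker-controlled watershed segmentation on an inverted CHM, assuming the scikit-image implementation (the paper does not name the software used); the treetop markers come from the local maximum step sketched above.

```python
import numpy as np
from skimage.segmentation import watershed

def segment_crowns(chm, treetop_rowcol, min_height=5.0):
    """Marker-controlled watershed on an inverted CHM.

    chm            : 2-D array of canopy heights (NaN outside the forest mask)
    treetop_rowcol : (N, 2) array of local-maximum pixel indices used as markers
    Returns a label image: 0 = background, 1..N = individual crowns.
    """
    valid = np.isfinite(chm) & (chm >= min_height)   # forest mask (>= 5 m)
    surface = np.where(valid, -chm, 0)               # invert: treetops become minima

    markers = np.zeros(chm.shape, dtype=np.int32)
    for i, (r, c) in enumerate(treetop_rowcol, start=1):
        markers[r, c] = i                            # one unique label per treetop

    return watershed(surface, markers=markers, mask=valid)
```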

2.5. Tree Species Classification

A series of mono-temporal (single date) and multi-temporal (multiple dates) classifications were carried out for each sensor using an object-based classification approach, whereby groupings of pixels (image objects) are classified instead of individual pixels. An object-based approach performs better than a traditional pixel-based approach when classifying high-spatial resolution imagery since it can better handle the higher intra-class spectral variability that occurs as the spatial resolution increases [53,76,77]. The previously created tree crown segments acted as the image objects for this study.
The RF classifier was employed for all classifications. RF is a robust, non-parametric algorithm used often for classification and employed in other multi-temporal species classification studies [6,23,78,79]. The per-band average spectral value of the training tree segments was used to train the RF classifier. Each RF model was grown using 500 trees, with the number of variables considered at each split set to the square root of the number of spectral bands included in the model, as described below. The resulting model was then applied to the independent validation tree segments to assess its accuracy.
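As a concrete illustration of the classifier configuration described above (500 trees, square root of the number of bands considered at each split), the following sketch uses scikit-learn; this library choice is an assumption of the example rather than the authors' stated implementation, and the feature table and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_*: per-band mean spectral values of the tree-crown segments (one row per
# segment, one column per band in the mono- or multi-temporal stack);
# y_*: species labels. Synthetic placeholders are used here.
rng = np.random.default_rng(0)
n_bands = 9                                   # e.g., a three-date RGB stack
X_train = rng.normal(size=(700, n_bands))
y_train = rng.integers(0, 14, size=700)
X_valid = rng.normal(size=(700, n_bands))
y_valid = rng.integers(0, 14, size=700)

rf = RandomForestClassifier(
    n_estimators=500,         # 500 trees, as described above
    max_features="sqrt",      # sqrt of the number of spectral bands tried per split
    random_state=None,        # left unseeded: each run differs, hence the 30-run averaging below
)
rf.fit(X_train, y_train)
print("Validation overall accuracy:", rf.score(X_valid, y_valid))
```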
Each single date of imagery was classified alone (i.e., mono-temporal classification). Additionally, a series of multi-temporal image stacks were classified using varying combinations of the single-date orthomosaics for each sensor. Image stacks started with every combination of two dates. The number of dates included in the stack was then increased incrementally until all dates of imagery were included (i.e., three-date stack, four-date stack, five-date stack). In total, 62 combinations were generated, 31 per sensor (Table 4).
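The 31 combinations per sensor follow directly from enumerating every non-empty subset of the five acquisition dates, for example with the Aeria X dates (see Table 2 and Appendix A):

```python
from itertools import combinations

dates = ["4-26-20", "5-16-19", "5-30-19", "6-12-19", "6-27-20"]  # Aeria X collection dates

stacks = [combo for k in range(1, len(dates) + 1) for combo in combinations(dates, k)]
print(len(stacks), "date combinations per sensor")   # 5 + 10 + 10 + 5 + 1 = 31
for combo in stacks[:3]:
    print(" + ".join(combo))
```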

2.6. Accuracy Assessment

The accuracy of all the classifications was assessed using the validation tree segments and an error matrix approach [80]. The ground classification of each validation tree was compared to its respective map classification and the results tallied in a matrix with the columns and the rows of the matrix representing the sample’s ground and map classification, respectively. For each matrix, the overall accuracy (OA) was calculated by dividing the sum of the major diagonal (total agreement) by the total number of samples. The accuracy of the individual classes was determined by calculating the user’s (UA) and producer’s (PA) accuracies [81]. The PA was calculated by dividing the number of correctly classified samples for each class by the total number of samples for that class. The UA was calculated by dividing the number of correctly classified samples for each class by the total number of samples classified as that class. UA and PA were then used to calculate an F-measure (F; Equation (1)) as a way to summarize the UA and PA in a single metric.
F = 2 × (UA × PA) / (UA + PA)        (1)
Due to the randomization approach implemented by the RF classifier, the accuracy of no two RF models will be the same. To account for this, 30 RF models were generated for each date combination in Table 4. Each model was validated and the OA, UA, PA, and F calculated. These results were then averaged together to calculate a mean accuracy result for each combination.
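A compact sketch of the accuracy assessment described in this subsection: building the error matrix, deriving OA, UA, PA, and the per-class F of Equation (1), with the 30-run averaging indicated in comments. The class codes and labels are toy values for illustration only.

```python
import numpy as np

def error_matrix_summary(y_ref, y_map, classes):
    """Error matrix (rows = map label, columns = reference label) with overall,
    user's, and producer's accuracies and the per-class F-measure."""
    idx = {c: i for i, c in enumerate(classes)}
    k = len(classes)
    matrix = np.zeros((k, k), dtype=int)
    for ref, mapped in zip(y_ref, y_map):
        matrix[idx[mapped], idx[ref]] += 1

    overall = np.trace(matrix) / matrix.sum()
    producers = np.diag(matrix) / matrix.sum(axis=0)   # column totals: reference samples per class
    users = np.diag(matrix) / matrix.sum(axis=1)       # row totals: samples mapped to each class
    f = 2 * users * producers / (users + producers)    # Equation (1)
    return matrix, overall, users, producers, f

# Toy example with three class codes.
classes = ["eh", "wp", "rm"]
y_ref = ["eh", "eh", "wp", "rm", "rm", "wp"]
y_map = ["eh", "wp", "wp", "rm", "eh", "wp"]
_, oa, ua, pa, f = error_matrix_summary(y_ref, y_map, classes)
print(oa, ua, pa, f)

# In the workflow described above, an unseeded random forest would be retrained
# 30 times per date combination and the OA, UA, PA, and F values averaged, e.g.:
#   oas = [run_once(combo) for _ in range(30)]
#   mean_oa, std_oa = np.mean(oas), np.std(oas)
```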

2.7. Feature Importance

A feature importance investigation was carried out for both sensors. An RF classifier was trained using the training tree segments and all the bands for all dates of imagery and validated using the independent validation tree segments to establish a baseline accuracy. One at a time, each band included in the image stack was removed, the model retrained and validated, and the difference in overall accuracy taken as the measure of importance for that band.
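The drop-one-band importance procedure can be expressed as a simple loop; the sketch below assumes scikit-learn and the same segment feature tables used in the classification sketch above.

```python
from sklearn.ensemble import RandomForestClassifier

def band_importance(X_train, y_train, X_valid, y_valid, band_names):
    """Drop-one-band importance: the decrease in overall accuracy when a band is
    removed from the full (five-date) stack, relative to the baseline model."""
    def oa(cols):
        rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
        rf.fit(X_train[:, cols], y_train)
        return rf.score(X_valid[:, cols], y_valid)

    all_cols = list(range(X_train.shape[1]))
    baseline = oa(all_cols)
    importance = {}
    for i, name in enumerate(band_names):
        reduced = [c for c in all_cols if c != i]
        importance[name] = baseline - oa(reduced)   # positive = accuracy fell without the band
    return baseline, importance
```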

2.8. Statistical Comparisons

A kappa analysis was conducted to statistically compare the best single-date and multi-date classifications for each sensor. The kappa statistic, KHAT, is another measure of how well the classification agrees with the reference data that does not assume the land cover classes are independent and utilizes the information in the entire error matrix, not just the diagonal [80]. The KHAT statistic for two error matrices can be statistically compared to determine whether there is a significant difference between methodologies [75].
Several KHAT comparisons were conducted. First, within each sensor, the best mono- and multi-temporal classifications were compared to determine not only whether a multi-temporal classification was significantly better than a single-date classification, but also whether there was a significant difference between how many dates were used. Next, between-sensor KHAT comparisons were conducted for each date of imagery to compare the classification performance of the RGB imagery to that of the multispectral imagery.
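A simplified sketch of the KHAT computation and pairwise Z-test described above. The variance used here is a common large-sample approximation rather than the full delta-method expression given by Congalton and Green [75], so it is illustrative only; |Z| > 1.96 indicates a significant difference at the 95% confidence level.

```python
import numpy as np

def khat(matrix):
    """KHAT statistic for an error matrix (rows = map, columns = reference)."""
    n = matrix.sum()
    rows = matrix.sum(axis=1)
    cols = matrix.sum(axis=0)
    observed = np.trace(matrix) / n            # observed agreement
    chance = (rows * cols).sum() / n**2        # chance agreement
    k = (observed - chance) / (1 - chance)
    # Simplified large-sample variance (first term of the full expression in
    # Congalton and Green [75]); adequate for an illustrative comparison.
    var = observed * (1 - observed) / (n * (1 - chance) ** 2)
    return k, var

def kappa_z_test(matrix_a, matrix_b):
    """Pairwise test of two independent error matrices."""
    k1, v1 = khat(matrix_a)
    k2, v2 = khat(matrix_b)
    return abs(k1 - k2) / np.sqrt(v1 + v2)

# Usage: z = kappa_z_test(matrix_mono, matrix_multi)
```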

3. Results

3.1. Within-Sensor General Classification Results

Figure 2 presents the results of the three best performing classifications for the single- and multi-date image stacks based on the OA. The OA for all the classifications performed can be found in Appendix A (Table A1 and Table A2). Overall classification accuracies were highly varied, ranging from 24.8% to 61.1% for the Aeria and 27.0% to 55.5% for the Sequoia. Across the individual date groups, the mono-temporal classifications had the lowest overall accuracies, reaching a maximum OA of 37.3% and 36.2% for the Aeria and Sequoia, respectively. Generally, the inclusion of additional dates improved the accuracy of all classifications. However, there was a distinct leveling off in the OA as the number of dates included in the multi-temporal classification increased, with the peak OA reached by the five-date classification for the Aeria and the four-date classification for the Sequoia.
For the top performing combinations (Figure 2), the mid-spring and late spring imagery were consistently chosen. The best mono-temporal classification for both sensors also occurred at the end of May, in late spring. For the multi-temporal classifications, the best date combinations varied slightly between the sensors, but mid- and late spring imagery were frequently utilized, especially for the two- and three-date combinations, of which there were 10 each.

3.2. Mono- Versus Multi-Temporal Classification

The results of the pairwise comparison between the mono- and multi-temporal classifications for both sensors are given in Table 5. For each pairing, the 30 individual classifications were compared and the number of significantly different classifications totaled. Both sensors exhibited the same trend in the number of significantly different classifications. The best two-date multi-temporal classification was always significantly better than the best mono-temporal classification. Between two and three dates, the number of significantly different classifications decreased considerably. After three dates of imagery, there was no significant difference in the classifications.

3.3. Per-Species Classification Results

The UA, PA, and F for all species and all classifications are presented in Figure 3 and Figure 4 for the Aeria and Sequoia, respectively. The accuracies of eastern hemlock (eh) and white pine (wp), the only coniferous species in this study, were consistently better than those of the deciduous species across all combinations. The F of both was often >70%, peaking at 88% for eastern hemlock (Aeria) and 80% for white pine (Sequoia). White ash (wa), red maple (rm), and American beech (ab) were consistently poorly classified, never achieving Fs greater than 50%. The performance of the remaining species varied with the number of dates included and the specific dates in the combination for each sensor.

3.4. Between-Sensor Classification Results

The best performing Aeria and Sequoia classifications based on OA for each mono- and multi-temporal classification group were statistically compared. For each pairing, the 30 individual classifications were compared and the number of statistically significant results summarized. The results of the comparisons are shown in Figure 5. When compared to the Aeria, the Sequoia consistently under-performed in terms of OA. The smallest difference was seen in the mono-temporal classifications (OA difference of 1.1%) while the greatest occurred with the five-date classification (OA difference of 6.9%). None of the mono-temporal classifications were found to be significantly different. Each of the multi-temporal pairings had some significantly different results, the number of which increased with the number of added dates. Almost all of the five-date comparisons were found to be significantly different.

3.5. Feature Importance

The results of the feature importance analysis are presented in Figure 6. Feature importance here was measured as the decrease in overall accuracy relative to a baseline model (the five-date combination) when that feature or band was removed. Positive values indicate that the model accuracy decreased when the band was removed while negative values indicate that the model accuracy improved. For the Aeria, the blue bands were considerably more important than the other spectral bands. Furthermore, the mid- and late spring imagery, regardless of the spectral band, were also important. The Sequoia had numerous bands indicated as having negative impacts on performance. The mid- and early spring green and red bands were predominately the most important. The red-edge and NIR bands were consistently the least important.

4. Discussion

This study sought to (1) investigate whether a multi-temporal approach improved the accuracy of species-level forest composition mapping with UAS imagery in a highly heterogeneous forest, and in doing so to determine whether there is an optimal phenological window within which to collect imagery; and (2) compare the performance of RGB imagery collected via a consumer-grade DSLR to that of a multispectral camera. A series of mono-temporal and multi-temporal classifications of 14 different species were carried out for both sensors and validated with an independent set of reference samples and error matrices. Kappa comparisons were then conducted between the best performing mono- and multi-temporal classifications within each sensor and then between sensors to determine whether multi-temporal classifications were significantly better than mono-temporal classifications and whether there was any significant difference between the classifications produced by the RGB and multispectral sensors.
While the underlying goal of this study is to inform users on data collection strategies, it is important to note that this study was conducted at a single site in one part of the globe. The results of this study should be interpreted within the context from which they were derived. Geographic variation in phenology aside, results may vary even between geographically close locations simply due to differences in site, lighting, and composition, most of which are difficult to control.

4.1. Tree Species Classification Accuracy

This study achieved maximum overall accuracies of 61.1% and 55.5% for the Aeria and Sequoia, respectively. These OAs are lower than those of comparable studies that performed similar investigations [6,28,82]. Both Lisein et al. [28] and Michez et al. [6] conducted multi-species forest mapping in mixed forest stands using both multi-temporal RGB and multispectral UAS imagery. These studies achieved maximum accuracies of 91.2% (based on RF out-of-bag errors) and 84.1%, respectively. It should be noted that these studies, while similar, varied in two important ways. First, both studies only included five classes. Some were species while others were groupings representing specific genera (e.g., birches). This study included 14 individual species of trees. The greater number of species employed here led to greater spectral confusion, especially for species exhibiting similar phenology across the time period investigated [6]. This study chose to represent the diversity of the study site “as is”, rather than choosing a subset of species exhibiting the best separation, thus expanding the generalization of these results to similar conditions [2,23].
Second, these studies employed additional derivative layers that were not utilized here, mainly spectral indices and textural metrics. Additional derivative information, especially texture, has been found to significantly improve the accuracy of forest classification in a number of settings [78,82,83] and in other vegetation mapping studies as well [84,85]. This study establishes a baseline for the performance of these two sensors based on spectral properties alone. Given the resolution these UAS sensors are capable of achieving, a great deal of information on crown texture can be extracted. The benefits of textural metrics for mapping stands such as the one investigated here are an interesting topic in need of additional research.

4.2. Mono- versus Multi-Temporal Classification

Both sensors employed here demonstrated a continuous increase in the overall classification accuracy as the number of dates included in the multi-temporal classification increased (Figure 2). This result falls in line with many other studies that have investigated the performance of multi-temporal classifications both with UAS [6,28,82] and non-UAS imagery [43,46,86,87]. Of interest in this study was the significance of the additional benefit incurred by adding more dates. The highest accuracy was achieved when using all five dates of imagery for the Aeria and four dates for the Sequoia. From a cost–benefit perspective, one would look to achieve the highest accuracy possible with the least number of collections. While the OA did increase with the number of dates utilized, the rate at which it increased for both sensors leveled off, indicating a diminishing return. The results of the mono- versus multi-temporal kappa comparisons support this conclusion (Table 5). The two-date classification for both sensors was significantly better than the mono-temporal classification for all iterations. There was only a minor benefit when a third date was included and, beyond three dates, there was no significant benefit. Weil et al. [23] similarly saw little improvement in classification accuracy after three dates of optimally timed near-surface imagery using the RF classifier. These results not only reinforce the benefits of multi-temporal classifications, but also suggest that there would be no need to collect more than three dates of optimally timed imagery.

4.3. Timing of Aerial Collection

Based on the date combinations of the best performing mono- and multi-temporal classifications, the mid- and late spring imagery play an important role in tree species classification. The best mono-temporal collection date was found to be towards the end of May for both sensors. Similar studies investigating optimal phenological timing have also found the middle and end of spring to be important [23,28]. This runs counter to what one would expect, which is that the accuracy would be maximized at the point when the trees express their greatest phenological differences, either early spring or autumn [28]. Indeed, other studies have found autumn to be the optimal mono-temporal window for species mapping [23,46,86].
Lisein et al. [28] suggested that this period presents a balance between inter- and intra-species spectral variation, not only improving the separability between species but also the homogeneity within species. After this period, individual phenology starts to express the effects of differing microclimate, age, and even health [88,89,90]. It is at this point, too, that the spectral response of trees below the upper canopy is suppressed (full to almost full leaf-cover above), further improving within-species homogeneity. This suggests that more focus should be placed on the intra-species variation when collecting phenology data for species classification.
The results of the multi-temporal classifications still demonstrate that including periods with high inter-species variation is important for achieving high classification accuracies. The best performing two- and three-date classifications included those combinations with the mid-spring imagery and the late spring imagery. Many species experienced an increase in their individual accuracies for the date combinations containing both those dates (Figure 3 and Figure 4). Visually, the mid-spring imagery collected here exhibited the greatest difference between species. Unfortunately, due to equipment difficulties, the full phenological profile of the study site was not captured. Based on the results of the previously mentioned studies, the inclusion of autumn imagery along with the mid- and late spring imagery could have significantly increased the accuracy of the three-date classifications, perhaps leading to greater significance when statistically compared to the optimal two-date classification.
While this study focused primarily on a global classification result, it is still important to investigate the accuracy of the individual species. There was a substantial difference in the performance for different species and combinations (Figure 3 and Figure 4). Most notably, the two coniferous species were consistently well-classified compared to the deciduous species. Eastern hemlock exhibited accuracies >70% with only a single date of imagery. White pine performed better once there were two dates and then stabilized. White ash, American beech, and red maple did consistently poorly, showing only a minor improvement with additional dates. Within-species variation, as noted, could have a significant impact on an individual species’ performance. Red maple naturally exhibited great variability during the important mid-spring time period. Some trees were just starting to show their early red flowering while others had almost fully leafed out, expressing the influence of the wide variety of conditions red maple can tolerate [63,90]. American beech in the study area was much further ahead phenologically than most other species, almost completely leafed out by mid-spring, but many of the beech trees in the stand are suffering from the effects of beech bark disease. The range of infestation is wide, from some beech trees only recently infected to others nearing mortality. This range would have caused large variability in the spectral response, not just because of the change in vegetation health, but also because of the change in the structure of the canopy [6]. Additionally, the time series collected here may not have been dense enough to capture the specific periods within which a species becomes distinct. For example, white ash had few if any leaves by mid-spring but was fully leafed out by late spring; an important window may have been missed. Far more spectrally unique species, for example the aspen trees, black oak, and black birch, performed well, even with just a few dates of imagery.

4.4. RGB versus Multispectral Sensors for Tree Species Classification

The multispectral sensor employed here was found to underperform compared to the consumer-grade RGB sensor. The statistical comparison between the two sensors (Figure 5) suggests that for a mono-temporal classification the RGB and multispectral sensors were not significantly different. However, the RGB sensor became significantly better with each additional date added to the classification. Both Lisein et al. [28] and Michez et al. [6] carried out comparisons between multi-temporal RGB imagery and color infrared (CIR) imagery (green, red, and near infrared sensitivity only) for the purpose of forest species classification and found that the RGB outperformed the CIR. Both studies suggested that the poor performance of the CIR was due to the redundant sensitivity to NIR across the three bands after modifying their cameras. Nijland et al. [42] concluded the same when comparing modified (i.e., NIR blocking filter removed) and unmodified RGB cameras for monitoring plant health and phenology. This study sought to overcome the redundant sensitivity problem by utilizing a multispectral sensor designed specifically for vegetation mapping and monitoring. Not only was each band specifically designed to avoid spectral overlap, but the sensor also included an additional band in the red-edge region of the EMS, which has been found to benefit the discrimination between species [91,92,93]. The results of the feature importance testing (Figure 6) suggest that the blue band, which is lacking in the Parrot Sequoia, is of high importance for mapping tree species. Key et al. [86] also found the blue band to be highly significant for species classification due to its sensitivity to chlorophyll and insensitivity to shadowing in canopies, a significant problem in many types of classification studies [2,86,94]. The most important bands for the Sequoia also happened to be in the visible range (red and green), while the red-edge and the NIR bands were found to be the least important. The visible bands should thus be considered highly important when conducting future classification studies [31].
This result has important implications in that users of the technology may not necessarily have to buy a more expensive multispectral sensor when in fact they could achieve better results with the RGB sensor alone. However, studies comparing the consumer-grade RGB sensor to multispectral sensors containing blue bands, such as the Micasense RedEdge-MX (https://micasense.com) or the DJI P4 Multispectral (https://www.dji.com), should be carried out. Hyperspectral sensors with hundreds of bands extending well beyond the visible wavelengths exist and could very well improve the accuracy of species classifications [29,31,95], but they will most likely remain cost prohibitive for some time.

5. Conclusions

With greater focus being placed on precision forestry, there is a growing need to improve our ability to generate species-level maps of forest communities. UASs, capable of achieving very high spatial and temporal resolutions, have recently become an affordable means of generating these species-level maps. Hardware limitations, mainly weight, have restricted the type of sensors that can be flown. Lower spectral resolution, consumer-grade RGB cameras are frequently being flown due to their lower weight and affordability, but they are not typically optimal for classifying vegetation down to the species level. While lightweight multispectral cameras exist, the costs of these sensors are potentially prohibitive. This study investigated whether taking advantage of UASs’ higher temporal resolution to track tree phenology could help to improve the species-level classification accuracy with both RGB and multispectral imagery. Additionally, the optimal phenological timing for UAS data collection was investigated and a comparison between the performances of an RGB sensor and that of a multispectral sensor carried out.
The results show that there was a considerable and statistically significant increase in accuracy when utilizing a multi-temporal classification compared to a mono-temporal classification. While accuracy increased with additional dates of imagery, there was no significant increase in accuracy beyond three dates of optimally timed imagery. Based on the accuracy of the best performing date combinations, mid- and late spring imagery were found to be crucial points in the growing season to capture, most likely due to the high inter-species spectral heterogeneity and intra-species homogeneity captured at these moments.
The multispectral sensor employed in this study consistently underperformed compared to the RGB sensor. The RGB sensor was found to perform the same as the multispectral sensor when employing a mono-temporal classification, but became statistically better as the number of dates of imagery increased. An analysis of feature importance suggests that the visual bands are important for species classification at this resolution, especially the blue band, and less significance can be placed on the non-visual bands.
This study was conducted in a highly heterogeneous forest; 14 separate species were classified. High spectral confusion between species and high variability within species were to be expected, especially where species exhibited similar phenology or were naturally variable due to growing conditions or health. Future research is needed to investigate the benefits of derivative layers, such as spectral indices and texture, on overall accuracy. Additionally, expansion of the UAS collection into the late summer/autumn months may present interesting results. Finally, further research is necessary on comparing consumer-grade RGB sensors to multispectral sensors that employ all the visual bands, if not more.

Author Contributions

H.G. conceived and designed the study, conducted the data collection and analysis, and wrote the paper. R.G.C. contributed to the development of the overall research design and analysis and aided in the writing of the paper. Both authors have read and agreed to the published version of the manuscript.

Funding

Partial funding was provided by the New Hampshire Agricultural Experiment Station. This is Scientific Contribution Number: #2899. This work was supported by the USDA National Institute of Food and Agriculture McIntire Stennis Project #NH00095-M (Accession #1015520).

Data Availability Statement

Please contact Russell Congalton ([email protected]) for access to data.

Acknowledgments

The authors would like to acknowledge Jacob Dearborn for his assistance with reference data collection.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Accuracy for all Aeria classifications. Overall accuracy (OA) is reported here as the average OA of the 30 classification iterations performed for each combination. The standard deviations (STDs) are given. The results are sorted by the number of dates included in the combination and the average OA.
Number of Dates | Index | Date Combination | Average OA | STD
One Date | 1 | 4-26-20 | 0.248 | 0.004
One Date | 5 | 6-27-20 | 0.313 | 0.005
One Date | 4 | 6-12-19 | 0.332 | 0.005
One Date | 2 | 5-16-19 | 0.346 | 0.004
One Date | 3 | 5-30-19 | 0.373 | 0.005
Two Dates | 8 | 4-26-20 + 6-12-19 | 0.370 | 0.005
Two Dates | 15 | 6-12-19 + 6-27-20 | 0.391 | 0.006
Two Dates | 9 | 4-26-20 + 6-27-20 | 0.404 | 0.006
Two Dates | 7 | 4-26-20 + 5-30-19 | 0.429 | 0.005
Two Dates | 6 | 4-26-20 + 5-16-19 | 0.437 | 0.004
Two Dates | 13 | 5-30-19 + 6-12-19 | 0.440 | 0.006
Two Dates | 14 | 5-30-19 + 6-27-20 | 0.466 | 0.005
Two Dates | 11 | 5-16-19 + 6-12-19 | 0.479 | 0.006
Two Dates | 12 | 5-16-19 + 6-27-20 | 0.511 | 0.005
Two Dates | 10 | 5-16-19 + 5-30-19 | 0.540 | 0.005
Three Dates | 21 | 4-26-20 + 6-12-19 + 6-27-20 | 0.430 | 0.005
Three Dates | 19 | 4-26-20 + 5-30-19 + 6-12-19 | 0.479 | 0.006
Three Dates | 25 | 5-30-19 + 6-12-19 + 6-27-20 | 0.501 | 0.006
Three Dates | 20 | 4-26-20 + 5-30-19 + 6-27-20 | 0.507 | 0.006
Three Dates | 17 | 4-26-20 + 5-16-19 + 6-12-19 | 0.524 | 0.005
Three Dates | 18 | 4-26-20 + 5-16-19 + 6-27-20 | 0.534 | 0.005
Three Dates | 24 | 5-16-19 + 6-12-19 + 6-27-20 | 0.550 | 0.004
Three Dates | 16 | 4-26-20 + 5-16-19 + 5-30-19 | 0.555 | 0.006
Three Dates | 22 | 5-16-19 + 5-30-19 + 6-12-19 | 0.567 | 0.005
Three Dates | 23 | 5-16-19 + 5-30-19 + 6-27-20 | 0.588 | 0.005
Four Dates | 29 | 4-26-20 + 5-30-19 + 6-12-19 + 6-27-20 | 0.513 | 0.005
Four Dates | 28 | 4-26-20 + 5-16-19 + 6-12-19 + 6-27-20 | 0.554 | 0.004
Four Dates | 26 | 4-26-20 + 5-16-19 + 5-30-19 + 6-12-19 | 0.598 | 0.006
Four Dates | 30 | 5-16-19 + 5-30-19 + 6-12-19 + 6-27-20 | 0.604 | 0.006
Four Dates | 27 | 4-26-20 + 5-16-19 + 5-30-19 + 6-27-20 | 0.609 | 0.006
All | 31 | 4-26-20 + 5-16-19 + 5-30-19 + 6-12-19 + 6-27-20 | 0.611 | 0.005
Table A2. Accuracy for all Sequoia classifications. Overall accuracy (OA) is reported here as the average OA of the 30 classification iterations performed for each combination. The standard deviations (STDs) are given. The results are sorted by the number of dates included in the combination and the average OA.
Number of Dates | Index | Date Combination | Average OA | STD
One Date | 2 | 5-15-20 | 0.270 | 0.005
One Date | 1 | 4-28-20 | 0.272 | 0.005
One Date | 4 | 6-10-20 | 0.315 | 0.006
One Date | 5 | 6-26-20 | 0.333 | 0.006
One Date | 3 | 5-29-20 | 0.362 | 0.006
Two Dates | 6 | 4-28-20 + 5-15-20 | 0.362 | 0.004
Two Dates | 8 | 4-28-20 + 6-10-20 | 0.375 | 0.004
Two Dates | 9 | 4-28-20 + 6-26-20 | 0.393 | 0.005
Two Dates | 15 | 6-10-20 + 6-26-20 | 0.405 | 0.005
Two Dates | 11 | 5-15-20 + 6-10-20 | 0.431 | 0.006
Two Dates | 7 | 4-28-20 + 5-29-20 | 0.450 | 0.005
Two Dates | 13 | 5-29-20 + 6-10-20 | 0.455 | 0.004
Two Dates | 12 | 5-15-20 + 6-26-20 | 0.457 | 0.006
Two Dates | 10 | 5-15-20 + 5-29-20 | 0.489 | 0.005
Two Dates | 14 | 5-29-20 + 6-26-20 | 0.495 | 0.007
Three Dates | 21 | 4-28-20 + 6-10-20 + 6-26-20 | 0.437 | 0.005
Three Dates | 17 | 4-28-20 + 5-15-20 + 6-10-20 | 0.452 | 0.007
Three Dates | 18 | 4-28-20 + 5-15-20 + 6-26-20 | 0.462 | 0.006
Three Dates | 19 | 4-28-20 + 5-29-20 + 6-10-20 | 0.470 | 0.007
Three Dates | 24 | 5-15-20 + 6-10-20 + 6-26-20 | 0.479 | 0.005
Three Dates | 25 | 5-29-20 + 6-10-20 + 6-26-20 | 0.494 | 0.006
Three Dates | 22 | 5-15-20 + 5-29-20 + 6-10-20 | 0.502 | 0.005
Three Dates | 20 | 4-28-20 + 5-29-20 + 6-26-20 | 0.513 | 0.006
Three Dates | 16 | 4-28-20 + 5-15-20 + 5-29-20 | 0.515 | 0.006
Three Dates | 23 | 5-15-20 + 5-29-20 + 6-26-20 | 0.539 | 0.005
Four Dates | 28 | 4-28-20 + 5-15-20 + 6-10-20 + 6-26-20 | 0.478 | 0.005
Four Dates | 29 | 4-28-20 + 5-29-20 + 6-10-20 + 6-26-20 | 0.523 | 0.008
Four Dates | 30 | 5-15-20 + 5-29-20 + 6-10-20 + 6-26-20 | 0.528 | 0.005
Four Dates | 26 | 4-28-20 + 5-15-20 + 5-29-20 + 6-10-20 | 0.528 | 0.006
Four Dates | 27 | 4-28-20 + 5-15-20 + 5-29-20 + 6-26-20 | 0.555 | 0.006
All | 31 | 4-28-20 + 5-15-20 + 5-29-20 + 6-10-20 + 6-26-20 | 0.542 | 0.007

References

  1. Brosofske, K.D.; Froese, R.E.; Falkowski, M.J.; Banskota, A. A Review of Methods for Mapping and Prediction of Inventory Attributes for Operational Forest Management. For. Sci. 2014, 60, 733–756. [Google Scholar] [CrossRef]
  2. Fassnacht, F.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  3. Turner, W.; Spector, S.; Gardiner, N.; Fladeland, M.; Sterling, E.; Steininger, M. Remote sensing for biodiversity science and conservation. Trends Ecol. Evol. 2003, 18, 306–314. [Google Scholar] [CrossRef]
  4. Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M.A.; Luoma, V.; Tommaselli, A.M.G.; Imai, N.N.; et al. Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2018, 10, 338. [Google Scholar] [CrossRef] [Green Version]
  5. Klouček, T.; Komárek, J.; Surový, P.; Hrach, K.; Janata, P.; Vašíček, B. The Use of UAV Mounted Sensors for Precise Detection of Bark Beetle Infestation. Remote Sens. 2019, 11, 1561. [Google Scholar] [CrossRef] [Green Version]
  6. Michez, A.; Piégay, H.; Lisein, J.; Claessens, H.; Lejeune, P. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system. Environ. Monit. Assess. 2016, 188, 1–19. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Goodbody, T.R.; Coops, N.C.; Marshall, P.L.; Tompalski, P.; Crawford, P. Unmanned aerial systems for precision forest inventory purposes: A review and case study. For. Chron. 2017, 93, 71–81. [Google Scholar] [CrossRef] [Green Version]
  8. Gülci, S. The determination of some stand parameters using SfM-based spatial 3D point cloud in forestry studies: An analysis of data production in pure coniferous young forest stands. Environ. Monit. Assess. 2019, 191, 495. [Google Scholar] [CrossRef]
  9. Alonzo, M.; Andersen, H.-E.; Morton, D.C.; Cook, B.D. Quantifying Boreal Forest Structure and Composition Using UAV Structure from Motion. Forests 2018, 9, 119. [Google Scholar] [CrossRef] [Green Version]
  10. Franklin, S.E.; Wulder, M.A. Remote sensing methods in medium spatial resolution satellite data land cover classification of large areas. Prog. Phys. Geogr. Earth Environ. 2002, 26, 173–205. [Google Scholar] [CrossRef]
  11. Wulder, M.A.; Hall, R.J.; Coops, N.; Franklin, S. High Spatial Resolution Remotely Sensed Data for Ecosystem Characterization. BioScience 2004, 54, 511–521. [Google Scholar] [CrossRef] [Green Version]
  12. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef] [Green Version]
  13. Cruzan, M.B.; Weinstein, B.G.; Grasty, M.R.; Kohrn, B.F.; Hendrickson, E.C.; Arredondo, T.M.; Thompson, P.G. Small Unmanned Aerial Vehicles (Micro-Uavs, Drones) in Plant Ecology. Appl. Plant Sci. 2016, 4, 1–11. [Google Scholar] [CrossRef]
  14. Whitehead, K.; Hugenholtz, C.H. Remote sensing of the environment with small unmanned aircraft systems (UASs), part 1: A review of progress and challenges. J. Unmanned Veh. Syst. 2014, 2, 69–85. [Google Scholar] [CrossRef]
  15. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2012, 3, 397–404. [Google Scholar] [CrossRef]
  16. Baena, S.; Moat, J.; Whaley, O.; Boyd, D. Identifying species from the air: UAVs and the very high resolution challenge for plant conservation. PLoS ONE 2017, 12, e0188714. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Dvořák, P.; Müllerová, J.; Bartaloš, T.; Brůna, J. Unmanned Aerial Vehicles for Alien Plant Species Detection and Monitoring. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W4, 83–90. [Google Scholar] [CrossRef] [Green Version]
  18. Müllerová, J.; Brůna, J.; Bartaloš, T.; Dvořák, P.; Vitkova, M.; Pyšek, P. Timing Is Important: Unmanned Aircraft vs. Satellite Imagery in Plant Invasion Monitoring. Front. Plant Sci. 2017, 8, 887. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Wijesingha, J.; Astor, T.; Schulze-Brüninghoff, D.; Wachendorf, M. Mapping Invasive Lupinus polyphyllus Lindl. in Semi-natural Grasslands Using Object-Based Image Analysis of UAV-borne Images. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 391–406. [Google Scholar] [CrossRef]
  20. Brooks, C.; Weinstein, C.; Poley, A.; Grimm, A.; Marion, N.; Bourgeau-Chavez, L.; Hansen, D.; Kowalski, K. Using Uncrewed Aerial Vehicles for Identifying the Extent of Invasive Phragmites australis in Treatment Areas Enrolled in an Adaptive Management Program. Remote Sens. 2021, 13, 1895. [Google Scholar] [CrossRef]
  21. Laliberte, A.S.; Herrick, J.E.; Rango, A.; Winters, C. Acquisition, Orthorectification, and Object-based Classification of Unmanned Aerial Vehicle (UAV) Imagery for Rangeland Monitoring. Photogramm. Eng. Remote Sens. 2010, 76, 661–672. [Google Scholar] [CrossRef]
  22. Lu, B.; He, Y. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens. 2017, 128, 73–85. [Google Scholar] [CrossRef]
  23. Weil, G.; Lensky, I.M.; Resheff, Y.S.; Levin, N. Optimizing the Timing of Unmanned Aerial Vehicle Image Acquisition for Applied Mapping of Woody Vegetation Species Using Feature Selection. Remote Sens. 2017, 9, 1130. [Google Scholar] [CrossRef] [Green Version]
  24. Komárek, J.; Klouček, T.; Prošek, J. The potential of Unmanned Aerial Systems: A tool towards precision classification of hard-to-distinguish vegetation types? Int. J. Appl. Earth Obs. Geoinf. 2018, 71, 9–19. [Google Scholar] [CrossRef]
  25. Leduc, M.-B.; Knudby, A.J. Mapping Wild Leek through the Forest Canopy Using a UAV. Remote Sens. 2018, 10, 70. [Google Scholar] [CrossRef] [Green Version]
  26. Knoth, C.; Klein, B.; Prinz, T.; Kleinebecker, T. Unmanned aerial vehicles as innovative remote sensing platforms for high-resolution infrared imagery to support restoration monitoring in cut-over bogs. Appl. Veg. Sci. 2013, 16, 509–517. [Google Scholar] [CrossRef]
  27. Durgan, S.D.; Zhang, C.; Duecaster, A.; Fourney, F.; Su, H. Unmanned Aircraft System Photogrammetry for Mapping Diverse Vegetation Species in a Heterogeneous Coastal Wetland. Wetlands 2020, 40, 2621–2633. [Google Scholar] [CrossRef]
  28. Lisein, J.; Michez, A.; Claessens, H.; Lejeune, P. Discrimination of Deciduous Tree Species from Time Series of Unmanned Aerial System Imagery. PLoS ONE 2015, 10, e0141006. [Google Scholar] [CrossRef]
  29. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef] [Green Version]
  30. Franklin, S.E.; Ahmed, O.S. Deciduous tree species classification using object-based analysis and machine learning with unmanned aerial vehicle multispectral data. Int. J. Remote Sens. 2018, 39, 5236–5245. [Google Scholar] [CrossRef]
  31. Miyoshi, G.T.; Imai, N.N.; Tommaselli, A.M.G.; De Moraes, M.V.A.; Honkavaara, E. Evaluation of Hyperspectral Multitemporal Information to Improve Tree Species Identification in the Highly Diverse Atlantic Forest. Remote Sens. 2020, 12, 244. [Google Scholar] [CrossRef] [Green Version]
  32. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  33. Manfreda, S.; McCabe, M.F.; Miller, P.E.; Lucas, R.; Pajuelo Madrigal, V.; Mallinis, G.; Ben-Dor, E.; Helman, D.; Estes, L.; Ciraolo, G.; et al. On the Use of Unmanned Aerial Systems for Environmental Monitoring. Remote Sens. 2018, 10, 641. [Google Scholar] [CrossRef] [Green Version]
  34. Niethammer, U.; James, M.; Rothmund, S.; Travelletti, J.; Joswig, M. UAV-based remote sensing of the Super-Sauze landslide: Evaluation and results. Eng. Geol. 2012, 128, 2–11. [Google Scholar] [CrossRef]
  35. Hugenholtz, C.H.; Whitehead, K.; Brown, O.W.; Barchyn, T.E.; Moorman, B.; LeClair, A.; Riddell, K.; Hamilton, T. Geomorphological mapping with a small unmanned aircraft system (sUAS): Feature detection and accuracy assessment of a photogrammetrically-derived digital terrain model. Geomorphology 2013, 194, 16–24. [Google Scholar] [CrossRef] [Green Version]
  36. Mafanya, M.; Tsele, P.; Botai, J.; Manyama, P.; Swart, B.; Monate, T. Evaluating pixel and object based image classification techniques for mapping plant invasions from UAV derived aerial imagery: Harrisia pomanensis as a case study. ISPRS J. Photogramm. Remote Sens. 2017, 129, 1–11. [Google Scholar] [CrossRef] [Green Version]
  37. Pádua, L.; Hruška, J.; Bessa, J.; Adão, T.; Martins, L.M.; Gonçalves, J.A.; Peres, E.; Sousa, A.M.R.; Castro, J.P.; Sousa, J.J. Multi-Temporal Analysis of Forestry and Coastal Environments Using UASs. Remote Sens. 2017, 10, 24. [Google Scholar] [CrossRef] [Green Version]
  38. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 4th ed.; Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 2015; ISBN 0132058405. [Google Scholar]
  39. Hernandez-Santin, L.; Rudge, M.L.; Bartolo, R.E.; Erskine, P.D. Identifying Species and Monitoring Understorey from UAS-Derived Data: A Literature Review and Future Directions. Drones 2019, 3, 9. [Google Scholar] [CrossRef] [Green Version]
  40. Hunt, J.E.R.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.T.; McCarty, G.W. Acquisition of NIR-Green-Blue Digital Photographs from Unmanned Aircraft for Crop Monitoring. Remote Sens. 2010, 2, 290–305. [Google Scholar] [CrossRef] [Green Version]
  41. Müllerová, J.; Brůna, J.; Dvořák, P.; Bartaloš, T.; Vítková, M. Does the Data Resolution/Origin Matter? Satellite, Airborne and Uav Imagery to Tackle Plant Invasions. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B7, 903–908. [Google Scholar] [CrossRef] [Green Version]
  42. Nijland, W.; De Jong, R.; De Jong, S.; Wulder, M.A.; Bater, C.W.; Coops, N. Monitoring plant condition and phenology using infrared sensitive consumer grade digital cameras. Agric. For. Meteorol. 2014, 184, 98–106. [Google Scholar] [CrossRef] [Green Version]
  43. MacLean, M.G.; Congalton, R.G. Applicability of Multi-date Land Cover Mapping using Landsat-5 TM Imagery in the Northeastern US. Photogramm. Eng. Remote Sens. 2013, 79, 359–368. [Google Scholar] [CrossRef]
  44. Mickelson, J.G.; Civco, D.L.; Silander, J.A. Delineating Forest Canopy Species in the Northeastern United States Using Multi-Temporal TM Imagery. Photogramm. Eng. Remote Sens. 1998, 64, 891–904. [Google Scholar]
  45. Justice, D.; Deely, A.K.; Rubin, F. New Hampshire Land Cover Assessment: Final Report; NH GRANIT: Durham, NH, USA, 2002; Available online: https://granit.unh.edu/data/search?dset=nhlc01/nh (accessed on 2 July 2021).
  46. Hill, R.A.; Wilson, A.; George, M.; Hinsley, S. Mapping tree species in temperate deciduous woodland using time-series multi-spectral data. Appl. Veg. Sci. 2010, 13, 86–99. [Google Scholar] [CrossRef]
  47. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef] [Green Version]
  48. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-Based Urban Tree Species Classification Using Bi-Temporal WorldView-2 and WorldView-3 Images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef] [Green Version]
  49. Lucieer, A.; De Jong, S.M.; Turner, D. Mapping landslide displacements using Structure from Motion (SfM) and image correlation of multi-temporal UAV photography. Prog. Phys. Geogr. Earth Environ. 2014, 38, 97–116. [Google Scholar] [CrossRef]
  50. Du, M.; Noguchi, N. Monitoring of Wheat Growth Status and Mapping of Wheat Yield’s within-Field Spatial Variations Using Color Images Acquired from UAV-camera System. Remote Sens. 2017, 9, 289. [Google Scholar] [CrossRef] [Green Version]
  51. Kohv, M.; Sepp, E.; Vammus, L. Assessing multitemporal water-level changes with uav-based photogrammetry. Photogramm. Rec. 2017, 32, 424–442. [Google Scholar] [CrossRef] [Green Version]
  52. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  53. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Hay, G.J.; Castilla, G. Geographic object-based image analysis (GEOBIA): A new name for a new discipline. In Object-Based Image Analysis. Lecture Notes in Geoinformation and Cartography; Blaschke, T., Lang, S., Hay, G., Eds.; Springer: Berlin, Germany, 2008; pp. 75–89. [Google Scholar]
  55. Lang, S.; Baraldi, A.; Tiede, D.; Hay, G.; Blaschke, T. Towards a (GE)OBIA 2.0 Manifesto—Achievements and Open Challenges in Information & Knowledge Extraction from Big EARTH Data. In Proceedings of the GEOBIA 2018—From pixels to ecosystems and global sustainability, Montpellier, France, 18–22 June 2018; pp. 18–22. [Google Scholar]
  56. Bhuiyan, A.E.; Witharana, C.; Liljedahl, A.K.; Jones, B.M.; Daanen, R.; Epstein, H.E.; Kent, K.; Griffin, C.G.; Agnew, A. Understanding the Effects of Optimal Combination of Spectral Bands on Deep Learning Model Predictions: A Case Study Based on Permafrost Tundra Landform Mapping Using High Resolution Multispectral Satellite Imagery. J. Imaging 2020, 6, 97. [Google Scholar] [CrossRef]
  57. Bhuiyan, A.E.; Witharana, C.; Liljedahl, A.K. Use of Very High Spatial Resolution Commercial Satellite Imagery and Deep Learning to Automatically Map Ice-Wedge Polygons across Tundra Vegetation Types. J. Imaging 2020, 6, 137. [Google Scholar] [CrossRef]
  58. Cai, Y.; Huang, H.; Wang, K.; Zhang, C.; Fan, L.; Guo, F. Selecting Optimal Combination of Data Channels for Semantic Segmentation in City Information Modelling (CIM). Remote Sens. 2021, 13, 1367. [Google Scholar] [CrossRef]
  59. Kattenborn, T.; Eichel, J.; Wiser, S.; Burrows, L.; Fassnacht, F.E.; Schmidtlein, S. Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery. Remote Sens. Ecol. Conserv. 2020, 6, 472–486. [Google Scholar] [CrossRef] [Green Version]
  60. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  61. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  62. Westveld, M. Natural forest vegetation zones of New England. J. For. 1956, 54, 332–338. [Google Scholar]
  63. Klosterman, S.; Richardson, A.D. Observing Spring and Fall Phenology in a Deciduous Forest with Aerial Drone Imagery. Sensors 2017, 17, 2852. [Google Scholar] [CrossRef] [Green Version]
  64. SenseFly. eMotion User Manual, Revision 3.1; SenseFly SA: Cheseaux-sur-Lausanne, Switzerland, 2020. [Google Scholar]
  65. Fraser, B.T.; Congalton, R.G. Issues in Unmanned Aerial Systems (UAS) Data Collection of Complex Forest Environments. Remote Sens. 2018, 10, 908. [Google Scholar] [CrossRef] [Green Version]
  66. Agisoft. Agisoft Metashape User Manual, Professional Edition, Version 1.6; Agisoft LLC: St. Petersburg, Russia, 2020. [Google Scholar]
  67. Dandois, J.P.; Ellis, E.C. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sens. Environ. 2013, 136, 259–276. [Google Scholar] [CrossRef] [Green Version]
  68. Lisein, J.; Pierrot-Deseilligny, M.; Bonnet, S.; Lejeune, P. A Photogrammetric Workflow for the Creation of a Forest Canopy Height Model from Small Unmanned Aerial System Imagery. Forests 2013, 4, 922–944. [Google Scholar] [CrossRef] [Green Version]
  69. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  70. Khosravipour, A.; Skidmore, A.; Isenburg, M.; Wang, T.; Hussin, Y.A. Generating Pit-free Canopy Height Models from Airborne Lidar. Photogramm. Eng. Remote Sens. 2014, 80, 863–872. [Google Scholar] [CrossRef]
  71. Hyyppä, J.; Yu, X.; Hyyppä, H.; Vastaranta, M.; Holopainen, M.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Vaaja, M.; Koskinen, J.; et al. Advances in Forest Inventory Using Airborne Laser Scanning. Remote Sens. 2012, 4, 1190–1207. [Google Scholar] [CrossRef] [Green Version]
  72. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 2011, 32, 4725–4747. [Google Scholar] [CrossRef]
  73. Gao, Y.; Mas, J.F.; Kerle, N.; Pacheco, J.A.N. Optimal region growing segmentation and its effect on classification accuracy. Int. J. Remote Sens. 2011, 32, 3747–3763. [Google Scholar] [CrossRef]
  74. Belgiu, M.; Drǎguţ, L. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 67–75. [Google Scholar] [CrossRef] [Green Version]
  75. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  76. Conchedda, G.; Durieux, L.; Mayaux, P. An object-based method for mapping and change analysis in mangrove ecosystems. ISPRS J. Photogramm. Remote Sens. 2008, 63, 578–589. [Google Scholar] [CrossRef]
  77. Johansen, K.; Phinn, S.; Witte, C. Mapping of riparian zone attributes using discrete return LiDAR, QuickBird and SPOT-5 imagery: Assessing accuracy and costs. Remote Sens. Environ. 2010, 114, 2679–2691. [Google Scholar] [CrossRef]
  78. Rodriguez-Galiano, V.F.; Olmo, M.C.; Abarca-Hernandez, F.; Atkinson, P.; Jeganathan, C. Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture. Remote Sens. Environ. 2012, 121, 93–107. [Google Scholar] [CrossRef]
  79. Goodbody, T.R.; Coops, N.C.; Hermosilla, T.; Tompalski, P.; Crawford, P. Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int. J. Remote Sens. 2017, 39, 5246–5264. [Google Scholar] [CrossRef]
  80. Congalton, R.G.; Oderwald, R.G.; Mead, R.A. Assessing Landsat classification accuracy using discrete multivariate analysis statistical techniques. Photogramm. Eng. Remote Sens. 1983, 49, 1671–1678. [Google Scholar]
  81. Story, M.; Congalton, R.G. Accuracy Assessment: A User's Perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  82. Gini, R.; Sona, G.; Ronchetti, G.; Passoni, D.; Pinto, L. Improving Tree Species Classification Using UAS Multispectral Images and Texture Measures. ISPRS Int. J. Geoinf. 2018, 7, 315. [Google Scholar] [CrossRef] [Green Version]
  83. Ferreira, M.P.; Wagner, F.H.; Aragão, L.E.; Shimabukuro, Y.E.; Filho, C.R.D.S. Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis. ISPRS J. Photogramm. Remote Sens. 2019, 149, 119–131. [Google Scholar] [CrossRef]
  84. Laliberte, A.S.; Rango, A. Image Processing and Classification Procedures for Analysis of Sub-decimeter Imagery Acquired with an Unmanned Aircraft over Arid Rangelands. GISci. Remote Sens. 2011, 48, 4–23. [Google Scholar] [CrossRef] [Green Version]
  85. Feng, Q.; Liu, J.; Gong, J. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef] [Green Version]
  86. Key, T. A Comparison of Multispectral and Multitemporal Information in High Spatial Resolution Imagery for Classification of Individual Tree Species in a Temperate Hardwood Forest. Remote Sens. Environ. 2001, 75, 100–112. [Google Scholar] [CrossRef]
  87. Zhu, X.; Liu, D. Accurate mapping of forest types using dense seasonal Landsat time-series. ISPRS J. Photogramm. Remote Sens. 2014, 96, 1–11. [Google Scholar] [CrossRef]
  88. Crimmins, M.A.; Crimmins, T.M. Monitoring Plant Phenology Using Digital Repeat Photography. Environ. Manag. 2008, 41, 949–958. [Google Scholar] [CrossRef] [PubMed]
  89. Cole, E.F.; Sheldon, B.C. The shifting phenological landscape: Within- and between-species variation in leaf emergence in a mixed-deciduous woodland. Ecol. Evol. 2017, 7, 1135–1147. [Google Scholar] [CrossRef] [Green Version]
  90. Klosterman, S.; Melaas, E.; Wang, J.; Martinez, A.; Frederick, S.; O’Keefe, J.; Orwig, D.A.; Wang, Z.; Sun, Q.; Schaaf, C.; et al. Fine-scale perspectives on landscape phenology from unmanned aerial vehicle (UAV) photography. Agric. For. Meteorol. 2018, 248, 397–407. [Google Scholar] [CrossRef]
  91. Qiu, S.; He, B.; Yin, C.; Liao, Z. Assessments of sentinel-2 vegetation red-edge spectral bands for improving land cover classification. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 871–874. [Google Scholar] [CrossRef] [Green Version]
  92. Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
  93. Ottosen, T.-B.; Petch, G.; Hanson, M.; Skjøth, C.A. Tree cover mapping based on Sentinel-2 images demonstrate high thematic accuracy in Europe. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 101947. [Google Scholar] [CrossRef]
  94. Milas, A.S.; Arend, K.; Mayer, C.; Simonson, M.A.; Mackey, S. Different colours of shadows: Classification of UAV images. Int. J. Remote Sens. 2017, 38, 3084–3100. [Google Scholar] [CrossRef]
  95. Maschler, J.; Atzberger, C.; Immitzer, M. Individual Tree Crown Segmentation and Classification of 13 Tree Species Using Airborne Hyperspectral Data. Remote Sens. 2018, 10, 1218. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Study area location relative to New Hampshire and surrounding New England states. Red boundary indicates the area covered by the UAS for all data collections.
Figure 2. The three best single- and multi-date image combinations based on the overall accuracy for (a) the normal color Aeria sensor and (b) the multispectral Parrot Sequoia sensor. The bars are grouped by the number of images in the image stack.
Figure 3. Producer’s accuracy, user’s accuracy, and F-measure for all Aeria classifications. The values on the x-axis are the index values assigned to each classification (see Table 4). The y-axis shows the species abbreviations (see Table 3).
Figure 4. Producer’s accuracy, user’s accuracy, and F-measure for all Sequoia classifications. The values on the x-axis are the index values assigned to each classification (see Table 4). The y-axis shows the species abbreviations (see Table 3).
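For readers who want to reproduce the per-class metrics plotted in Figures 3 and 4, the sketch below shows how producer’s accuracy, user’s accuracy, and the F-measure can be derived from an error matrix. The 3 × 3 matrix values are hypothetical placeholders, not results from this study; the study’s own matrices contain one row and column per species.

```python
import numpy as np

# Hypothetical error matrix (rows = classified, columns = reference).
error_matrix = np.array([
    [50,  3,  2],
    [ 4, 45,  6],
    [ 1,  5, 40],
])

diagonal = np.diag(error_matrix).astype(float)
row_totals = error_matrix.sum(axis=1)   # classified (map) totals
col_totals = error_matrix.sum(axis=0)   # reference totals

users_accuracy = diagonal / row_totals       # complement of commission error
producers_accuracy = diagonal / col_totals   # complement of omission error
f_measure = (2 * users_accuracy * producers_accuracy
             / (users_accuracy + producers_accuracy))
overall_accuracy = diagonal.sum() / error_matrix.sum()

for i, (pa, ua, f) in enumerate(zip(producers_accuracy, users_accuracy, f_measure)):
    print(f"class {i}: PA={pa:.2f}, UA={ua:.2f}, F={f:.2f}")
print(f"overall accuracy: {overall_accuracy:.2f}")
```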
Figure 5. Comparison between the best performing Aeria and Sequoia combinations, indicated in the label, within each combination group. The asterisk (*) indicates at least one statistically significant comparison at the 95% confidence level over 30 iterations. The value in parentheses indicates the number of iterations in which the pairing was significantly different.
Figure 6. Feature importance values for the (a) Aeria and (b) Sequoia sensors. Feature importance was measured as the decrease in overall accuracy of the baseline model when that band was removed from the model. Positive values indicate a decrease in accuracy, while negative values indicate an increase in accuracy.
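The leave-one-band-out importance described in the Figure 6 caption can be approximated as sketched below using a scikit-learn random forest. The synthetic feature matrix, labels, and band names are placeholders standing in for the study’s object-based features, and the authors’ exact model settings are not assumed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: rows are crown segments, columns are spectral features,
# y holds species labels (replace with the real object-based feature table).
rng = np.random.default_rng(42)
X = rng.random((300, 8))
y = rng.integers(0, 4, size=300)
band_names = [f"band_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def rf_accuracy(Xtr, Xte):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(Xtr, y_train)
    return accuracy_score(y_test, rf.predict(Xte))

baseline = rf_accuracy(X_train, X_test)

# Importance of each band = drop in overall accuracy when it is removed.
for i, name in enumerate(band_names):
    keep = [j for j in range(X.shape[1]) if j != i]
    acc = rf_accuracy(X_train[:, keep], X_test[:, keep])
    print(f"{name}: importance = {baseline - acc:+.3f}")
```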
Table 1. Camera specifications for the Sensefly Aeria X and Parrot Sequoia.

                           | Aeria X          | Parrot Sequoia MSS
Shutter                    | Global           | Global
Sensor                     | APS-C            | Multispectral sensor
Resolution                 | 24 MP            | 1.2 MP
Focal length               | 18.5 mm          | 3.98 mm
Spectral bands with ranges | Blue; Green; Red | Green (510 nm–590 nm); Red (620 nm–700 nm); Red edge (725 nm–745 nm); Near infrared (750 nm–830 nm)
Table 2. Collection dates for each sensor with seasonal descriptions. The description is based on regional trends in phenology and not on any particular date ranges.

Season       | Aeria X (RGB) | Parrot Sequoia (MSS)
Early spring | 26 April 2020 | 28 April 2020
Mid-spring   | 16 May 2019   | 15 May 2020
Late spring  | 30 May 2019   | 29 May 2020
Early summer | 12 June 2019  | 10 June 2020
Mid-summer   | 27 June 2020  | 26 June 2020
Table 3. Scientific and common names of tree species classified in this study.

Scientific Name        | Common Name      | Abbreviation
Fagus grandifolia      | American beech   | ab
Betula lenta           | Black birch      | bb
Quercus velutina       | Black oak        | bo
Tsuga canadensis       | Eastern hemlock  | eh
Betula papyrifera      | Paper birch      | pb
Populus grandidentata  | Bigtooth aspen   | pg
Populus tremuloides    | Quaking aspen    | qa
Acer rubrum            | Red maple        | rm
Quercus rubra          | Red oak          | ro
Carya ovata            | Shagbark hickory | sh
Acer saccharum         | Sugar maple      | sm
Fraxinus americana     | White ash        | wa
Pinus strobus          | White pine       | wp
Table 4. All single- and multi-date image stacks for classification, grouped by the number of dates included (far-left column). The index column is a unique identifier assigned to each combination within a sensor.

Dates       | Index | Aeria                                           | Sequoia
One Date    | 1     | 4-26-20                                         | 4-28-20
            | 2     | 5-16-19                                         | 5-15-20
            | 3     | 5-30-19                                         | 5-29-20
            | 4     | 6-12-19                                         | 6-10-20
            | 5     | 6-27-20                                         | 6-26-20
Two Dates   | 6     | 4-26-20 + 5-16-19                               | 4-28-20 + 5-15-20
            | 7     | 4-26-20 + 5-30-19                               | 4-28-20 + 5-29-20
            | 8     | 4-26-20 + 6-12-19                               | 4-28-20 + 6-10-20
            | 9     | 4-26-20 + 6-27-20                               | 4-28-20 + 6-26-20
            | 10    | 5-16-19 + 5-30-19                               | 5-15-20 + 5-29-20
            | 11    | 5-16-19 + 6-12-19                               | 5-15-20 + 6-10-20
            | 12    | 5-16-19 + 6-27-20                               | 5-15-20 + 6-26-20
            | 13    | 5-30-19 + 6-12-19                               | 5-29-20 + 6-10-20
            | 14    | 5-30-19 + 6-27-20                               | 5-29-20 + 6-26-20
            | 15    | 6-12-19 + 6-27-20                               | 6-10-20 + 6-26-20
Three Dates | 16    | 4-26-20 + 5-16-19 + 5-30-19                     | 4-28-20 + 5-15-20 + 5-29-20
            | 17    | 4-26-20 + 5-16-19 + 6-12-19                     | 4-28-20 + 5-15-20 + 6-10-20
            | 18    | 4-26-20 + 5-16-19 + 6-27-20                     | 4-28-20 + 5-15-20 + 6-26-20
            | 19    | 4-26-20 + 5-30-19 + 6-12-19                     | 4-28-20 + 5-29-20 + 6-10-20
            | 20    | 4-26-20 + 5-30-19 + 6-27-20                     | 4-28-20 + 5-29-20 + 6-26-20
            | 21    | 4-26-20 + 6-12-19 + 6-27-20                     | 4-28-20 + 6-10-20 + 6-26-20
            | 22    | 5-16-19 + 5-30-19 + 6-12-19                     | 5-15-20 + 5-29-20 + 6-10-20
            | 23    | 5-16-19 + 5-30-19 + 6-27-20                     | 5-15-20 + 5-29-20 + 6-26-20
            | 24    | 5-16-19 + 6-12-19 + 6-27-20                     | 5-15-20 + 6-10-20 + 6-26-20
            | 25    | 5-30-19 + 6-12-19 + 6-27-20                     | 5-29-20 + 6-10-20 + 6-26-20
Four Dates  | 26    | 4-26-20 + 5-16-19 + 5-30-19 + 6-12-19           | 4-28-20 + 5-15-20 + 5-29-20 + 6-10-20
            | 27    | 4-26-20 + 5-16-19 + 5-30-19 + 6-27-20           | 4-28-20 + 5-15-20 + 5-29-20 + 6-26-20
            | 28    | 4-26-20 + 5-16-19 + 6-12-19 + 6-27-20           | 4-28-20 + 5-15-20 + 6-10-20 + 6-26-20
            | 29    | 4-26-20 + 5-30-19 + 6-12-19 + 6-27-20           | 4-28-20 + 5-29-20 + 6-10-20 + 6-26-20
            | 30    | 5-16-19 + 5-30-19 + 6-12-19 + 6-27-20           | 5-15-20 + 5-29-20 + 6-10-20 + 6-26-20
All         | 31    | 4-26-20 + 5-16-19 + 5-30-19 + 6-12-19 + 6-27-20 | 4-28-20 + 5-15-20 + 5-29-20 + 6-10-20 + 6-26-20
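A minimal sketch of how the 31 single- and multi-date stacks in Table 4 can be enumerated programmatically, assuming Python's standard library; the date strings follow the Aeria column of Table 4, and the Sequoia list would be handled identically.

```python
from itertools import combinations

# Acquisition dates for the Aeria X (see Table 2).
aeria_dates = ["4-26-20", "5-16-19", "5-30-19", "6-12-19", "6-27-20"]

# All single- and multi-date stacks: 2^5 - 1 = 31 combinations, as in Table 4.
stacks = []
for n in range(1, len(aeria_dates) + 1):
    for combo in combinations(aeria_dates, n):
        stacks.append(" + ".join(combo))

for index, stack in enumerate(stacks, start=1):
    print(index, stack)
```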
Table 5. Results of the mono- versus multi-temporal kappa comparisons for the Aeria and Sequoia. The date combination with the highest overall accuracy within each sensor was used for each comparison. Each value represents the number of iterations out of 30 that were found to be significantly different at the 95% confidence level.

Comparison                 | Aeria | Sequoia
One date vs. two dates     | 30    | 30
Two dates vs. three dates  | 5     | 3
Three dates vs. four dates | 0     | 0
Four dates vs. five dates  | 0     | 0
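The pairwise significance tests summarized in Table 5 (and Figure 5) compare two kappa statistics using the Z-test described by Congalton and Green [75,80]. The sketch below illustrates that test; the kappa values and variances shown are hypothetical, and in practice the variance of each kappa is estimated from the corresponding classification's error matrix.

```python
from math import sqrt

def kappa_z_test(k1, var1, k2, var2, z_crit=1.96):
    """Pairwise test for a significant difference between two kappa statistics.
    var1 and var2 are the estimated variances of the two kappa values."""
    z = abs(k1 - k2) / sqrt(var1 + var2)
    return z, z > z_crit  # significant at the 95% confidence level

# Hypothetical values for a single iteration of one comparison.
z, significant = kappa_z_test(k1=0.78, var1=0.0004, k2=0.71, var2=0.0005)
print(f"Z = {z:.2f}, significant: {significant}")
```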