Article

Ultra-Light Aircraft-Based Hyperspectral and Colour-Infrared Imaging to Identify Deciduous Tree Species in an Urban Environment

by Gintautas Mozgeris 1,*, Vytautė Juodkienė 1, Donatas Jonikavičius 1, Lina Straigytė 2, Sébastien Gadal 3 and Walid Ouerghemmi 3

1 Institute of Forest Management and Wood Science, Aleksandras Stulginskis University, Studentų Str. 11, LT-53361 Akademija, Kaunas district, Lithuania
2 Institute of Forest Biology and Silviculture, Aleksandras Stulginskis University, Studentų Str. 11, LT-53361 Akademija, Kaunas district, Lithuania
3 Aix Marseille Univ, Université Côte d’Azur, Avignon Université, CNRS, ESPACE, UMR 7300, Avignon, 13545 Aix-en-Provence, France
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(10), 1668; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10101668
Submission received: 8 August 2018 / Revised: 16 October 2018 / Accepted: 19 October 2018 / Published: 22 October 2018

Abstract
One may consider the application of remote sensing as a trade-off between the imaging platforms, sensors, and data gathering and processing techniques. This study addresses the potential of hyperspectral imaging using ultra-light aircraft for vegetation species mapping in an urban environment, exploring both the engineering and scientific aspects of imaging platform design and image classification methods. An imaging system based on the simultaneous use of a Rikola frame-format hyperspectral camera and an adapted Nikon D800E colour-infrared camera installed on board a Bekas X32 manned ultra-light aircraft is introduced. Two test imaging flight missions were conducted, in July 2015 and September 2016, over a 4000 ha area in Kaunas City, Lithuania. Sixteen and 64 spectral bands (in 2015 and 2016, respectively) in the spectral range of 500–900 nm were recorded together with colour-infrared images. Three research questions were explored in assessing the identification of six deciduous tree species: (1) Pre-treatment of spectral features for classification, (2) testing five conventional machine learning classifiers, and (3) fusion of hyperspectral and colour-infrared images. Classification performance was assessed by applying leave-one-out cross-validation at the individual crown level, using as a reference at least 100 field-inventoried trees for each species. The best-performing classification algorithm, a multilayer perceptron using all spectral properties extracted from the hyperspectral images, resulted in a moderate classification accuracy: The overall classification accuracy was 63%, Cohen’s Kappa was 0.54, and the species-specific classification accuracies were in the range of 51–72%. Hyperspectral images resulted in a significantly better tree species classification ability than the colour-infrared images, and the simultaneous use of spectral properties extracted from hyperspectral and colour-infrared images slightly improved the accuracy for the 2015 image. Even though classifications using hyperspectral data cubes of 64 bands resulted in relatively larger accuracies than with 16 bands, the classification error matrices were not statistically different. Alternative imaging platforms (an unmanned aerial vehicle and a Cessna 172 aircraft) and flight settings are discussed using simulated imaging projects assuming the same study area and field of application. Ultra-light aircraft-based hyperspectral and colour-infrared imaging was considered to be a technically and economically sound solution for urban green space inventories, facilitating tree mapping, characterization, and monitoring.

1. Introduction

Remote sensing is the science and art of gathering information about an object of interest through analysing data acquired by a sensor that is not in contact with the object [1]. Thus, we must accept the necessity of dealing with very diverse platforms, sensors, data processing approaches, motivations, and experiences. The operational application of remote sensing can therefore be understood as a process of trade-offs among the spectral, temporal, and spatial properties of captured images [2], bearing in mind the quality, speed, and price of the final product. Numerous fields of remote sensing application are supported by an increasing volume of scientific research aimed at improving the quality of the acquired information. The current study investigated the use of cost-efficient remote sensing solutions in a very specific activity, which could, in principle, be implemented by human survey-based techniques. Specifically, we discuss the ease and cost-effectiveness of using remotely sensed data to facilitate urban green space inventories and monitoring. The role of green spaces in an urban environment and the importance of their management for society are difficult to overestimate [3]. Trees are important as they regulate urban climates and air and acoustic pollution, accumulate CO2, provide cultural services through recreation and education, and deliver supporting services for human well-being and habitats for biodiversity [4,5,6,7,8].
The quality and quantity of services and other benefits that trees deliver need to be assessed and monitored, providing the characteristics of the available trees and information on their development, including their responses to changing climate conditions, resistance to pathogenic agents, species invasiveness, etc. [9,10,11,12,13,14,15,16]. Thus, to keep such information up to date and to use it for spatial planning and management, inventories of green spaces in urban environments have been introduced, e.g., the urban green space inventory in Kaunas City, Lithuania, first launched in 2012, which was chosen as the study area for our research [17]. It aimed to map and catalogue individual trees, small groups of trees, and alleys, and to estimate their dendrometric characteristics, as well as the health condition of trees, to support spatial planners in making effective management decisions. Experiences gained in undertaking urban green space inventories and monitoring reveal numerous issues, such as high inventory costs, a lack of experienced personnel or an insufficient level of professionalism of the personnel involved, the accessibility of “fenced” areas, insufficient resources to undertake further monitoring, and database updating; these issues have also been reported in other cities [14,15,18]. This indicates the need to look for new techniques to build an inventory and to monitor green spaces in urban environments. Remote sensing is considered such a technique and has the potential to make the procedures more reliable, cheaper, and less labour intensive, delivering spatially explicit results and providing the opportunity to estimate properties of inventoried objects that cannot be objectively seen by the human eye.
The application of remote sensing based on satellite images and aerial photography has long been operational in inventorying trees or groups of trees in cities [8,18,19,20,21,22,23,24]. Trees have also been manually mapped using orthophoto maps originating from aerial photography in Lithuanian inventories of urban trees [17]. However, the methodological framework for the urban application of remote sensing is primarily taken from conventional forest inventories, which may fail in a different environment. As the focus in cities is on individual trees rather than on forest stands, the number of tree species is much larger and different tree characteristics are needed. For example, the total number of tree species identified in the Lithuanian stand-wise forest inventory is 29, with just eight species considered important for forest management [25]; however, there are 147 taxa of woody plants (96 tree species, 49 shrubs, and two vines) in the public parks of Kaunas City, which need to be treated as equally important in the management of urban green spaces. Thus, accepting that multi-spectral data have proven successful in the classification of forest types or species in conventional forest inventories, we need to look for solutions to improve the performance of remote sensing when dealing with a large variety of tree species. As such a solution, we consider hyperspectral imaging.
Hyperspectral remote sensing collects reflectance data in narrow spectral bands (~10 nm or less) across a wide spectral range [26]. Due to the high spectral resolution, hyperspectral sensors enhance the ability to obtain information facilitating the discrimination of species and the prediction of dendrometric and even physiological, chemical, and genetic characteristics of plants [27,28,29,30,31,32,33]. Hyperspectral imaging has also been successful for tree species mapping in urban areas [34,35,36]. Processing methods used for mapping vegetation species can be divided into two groups: Pixel-based approaches and object-based approaches. In the first case, each pixel of remotely sensed imagery is associated with a certain class that represents the objects being analysed. In the context of hyperspectral imagery, the images are first pre-treated to improve the image quality or to reduce redundant dimensionality. Then, pixel-level classification is applied using a variety of supervised or unsupervised algorithms, allocating each pixel to a certain class according to the properties of the initial or transformed (or both) feature spaces [37,38,39,40]. The object-based approach first involves image segmentation to group spectrally similar and spatially proximate pixels into objects, followed by the classification of the objects. Objects are delineated manually or using some computer-based segmentation algorithm. Then, the objects are classified according to their spectral, textural, and contextual properties [41,42,43,44,45]. Advantages of object-based approaches over pixel-based ones are reported in several studies, especially those involving the use of very high spatial resolution imagery [41,44,45]. Among numerous methodological approaches, machine-learning classifiers have already shown good potential for vegetation mapping [39,46], and may be considered a good trade-off between basic statistical/distance-based classifiers and deep learning ones [47,48], which require massive training data and long computational times [49]. The classification accuracy is usually improved by fusing hyperspectral imagery with other remotely sensed data, like airborne laser scanning and high spatial resolution colour-infrared aerial digital imagery [36,46,50]. However, most previous studies used high-end hyperspectral imaging equipment (e.g., Multispectral Infrared Visible Imaging Spectrometer (MIVIS), Airborne Visible InfraRed Imaging Spectrometer (AVIRIS), HyMap, or HySpex cameras), which shares some common features. Namely, such systems are relatively expensive, both in terms of installation and operation costs; require high-performance equipment, including the sensor and platform installation, as well as highly skilled operators; and are applicable only under strictly specified weather, flying, and data processing conditions.
The rapidly developing technology and use of unmanned aerial vehicles (UAVs), which provide images with ultrahigh spatial resolution at a user-defined spatio-temporal scale and under poor imaging conditions, while simultaneously being cheap and relatively easy to operate even for numerous non-professional users, may be a promising alternative to overcome the above-mentioned issues [51,52]. There are reports on the use of UAVs as a platform for the mapping of urban trees [53,54], and UAVs can be equipped with hyperspectral sensors too [46,55,56]. However, the limitations of UAVs in the context of urban application remain their operational safety restrictions, limited endurance and payload, and the vast number of small-format digital images acquired at different lighting conditions and flight altitudes, which require demanding photogrammetric and radiometric processing before the information can be extracted. In this study, the use of an ultra-light aircraft (ULA) equipped with a relatively cheap hyperspectral sensor is suggested as a potential alternative to UAVs. By ULA, we mean an aircraft with a maximum take-off weight of 450 kg and a maximum stall speed of 65 km/h. This solution has already proven its potential in forest monitoring [57] and precision agriculture [58] applications.
The current study addresses the potential of hyperspectral imaging using ultra-light aircraft for vegetation species mapping. It explores both the engineering and scientific aspects related to acquisition platform design and to object-based image classification methods. Several machine-learning classifiers are compared, focusing on their performance with the proposed image acquisition solution. We hypothesize that hyperspectral imagery outperforms colour-infrared imagery in the context of urban tree inventory-focused applications under certain conditions and assuming similar levels of costs involved. We argue that the use of ULAs enables more flexibility in the trade-offs made between the properties of acquired images compared to other imaging solutions. The paper is organized as follows: In Section 2, the materials and methods are presented, including the introduction of the imaging system, the data used, and the image processing methods. In Section 3, the results of mapping six deciduous tree species over our study area in Kaunas City, Lithuania, are introduced. In Section 4, we discuss and explain the results. Conclusions are drawn in Section 5. Finally, we specify the parameters used with the different classification methods in Appendix A.

2. Materials and Methods

2.1. Study Area

The study was carried out in the city of Kaunas, which is located in the central part of Lithuania (54°53′50″ N, 23°53′10″ E) at the confluence of the Neris and Nemunas rivers. Green spaces, such as city forests, parks, groves, gardens, and agricultural areas, cover 53% of the total area (~15,700 ha) of the city. The climate is humid continental, with an average annual temperature of approximately 6 °C, and the population is around 300,000. The ground surface is relatively flat, with some depressions along the rivers, and an average elevation of ~100 m above sea level.

2.2. Imaging System

The imaging system was installed on board a Bekas X32 aircraft (Lilienthal Aviation, Kharkiv, Ukraine), operated by the private aviation company, UAB Vilkas Avia. The aircraft used for the project (Figure 1a) was a two-seat, high-wing, single-engine pusher ultralight machine. The flight endurance of this aircraft is 6 h, with a normal fuel load and a cruising speed of 120 km/h. Normally, the speed used for imaging missions is 100 km/h. Other technical characteristics of the aircraft are length, 6.55 m; wingspan, 9.00 m; height, 2.00 m; wing area, 12.33 m2; maximum take-off weight, 495 kg; and powerplant, 1 × Rotax 912 (100 hp). To install the equipment, an originally developed aviation container was attached beneath the cockpit. The imaging system was based on the simultaneous use of Rikola hyperspectral (Senop Oy, Oulu, Finland, hereafter referred to as HSI) and Nikon D800E (Nikon Corporation, Shinagawa, Tokyo, Japan, hereafter CIR) cameras (Figure 1b), which were installed on an aluminium frame together with the XSENS MTI-G700 orientation sensor (Xsens Technologies B.V., Enschede, The Netherlands). EnsoMOSAIC flight management and camera control electronics (MosaicMill Oy, Vantaa, Finland) [59] together with a Novatel FlexPak6 dual-frequency GPS receiver (NovAtel Inc., Calgary, AB, Canada) were installed on the same frame. NavCam v7.1.0.0 software (MosaicMill Oy, Vantaa, Finland) was used for the flight and camera control.
The Rikola HSI camera, which is based on a tuneable Fabry–Pérot interferometer, provides a real spectral response in each pixel over the range of 500–900 nm, i.e., no interpolation is used [60]. Sixteen programmable consecutive bands (flying mode using a 1024 × 1024 pixel frame) with a full width at half maximum (FWHM) of ~10 nm were obtained in 2015; in 2016, the camera was upgraded by the manufacturer to deliver up to 65 bands. The opening angle of the Rikola camera is 37°. The broadband Nikon D800E CIR camera used for this project was converted and reconfigured to capture in the near-infrared (770–950 nm), red + near-infrared (550–850 nm), and green + near-infrared (530–630 nm) bands. The Nikon D800E camera was equipped with a 28 mm lens for the project. The image width and height were 7360 × 4912 pixels, and the camera opening angle was ~63°.

2.3. Imaging Missions and Image Processing

The total area of aerial imaging was around 4000 ha. Two imaging missions were conducted (Figure 2): 17 July 2015 from 11:55 (UTC) to 12:46 (UTC), with sun angles of 52.75–46.80 degrees, and 11 September 2016 from 09:15 (UTC) to 10:41 (UTC), with sun angles of 33.48–38.74 degrees. The National Oceanic and Atmospheric Administration (NOAA) Solar Calculator [61] was used to calculate the apparent position of the sun at the time of image acquisition. The flight characteristics for both missions were planned to be the same, i.e., the same flight plan was used. Flight planning was done in ArcGIS v10.6 (Esri Inc., Redlands, CA, USA) using EnsoMosaic FlyPlan v7 software (MosaicMill Oy, Vantaa, Finland). Flight parameters were exported to the formats required by the NavCam v7.1.0.0 software used for flight control. The flight altitude above ground level was 1000–1100 m. As both cameras were triggered at the same time, the side and forward overlaps differed. HSI images were captured with 50% side and 65% forward overlaps, and the figures for the CIR images were 70% and 70%, respectively. The nominal spatial resolution of the acquired original images was 0.65 m for HSI and 0.17 m for CIR. The weather conditions during both flights were quite similar; however, cumulus clouds were present during the first mission, resulting in some shadowed spots on the images. Such images were used for photogrammetric processing to obtain continuous mosaics; however, they were excluded from further tree identification studies.
To compare alternative imaging equipment and flight settings, we simulated image acquisition for the same area using EnsoMosaic FlyPlan v7 software in ArcGIS v10.6 and a calculator of basic aerial photography parameters in MS Excel. Only platforms that we had experience using with the Rikola hyperspectral camera were considered, i.e., we discussed the performance of an originally developed eight-rotor UAV [58] and a Cessna 172 aircraft [63]. Given the numerous potential options to compare, we restricted the discussion to the most relevant imaging alternatives for HSI and CIR applications in inventories of urban green spaces. We considered only imaging settings with a frame interval equal to or above 4 s, assuming this to be the maximum time needed to store the captured HSI images.
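The parameters produced by such a calculator follow from simple frame-camera geometry; a minimal Python sketch (using only the platform and camera characteristics reported in this paper) reproduces the nominal figures quoted here:

```python
import math

def swath_and_gsd(altitude_m, opening_angle_deg, pixels):
    """Ground swath width and ground sampling distance for a frame camera."""
    swath = 2 * altitude_m * math.tan(math.radians(opening_angle_deg) / 2)
    return swath, swath / pixels

# Rikola HSI: 1024 px square frame, 37 deg opening angle, flown at ~1000 m
hsi_swath, hsi_gsd = swath_and_gsd(1000, 37, 1024)   # ~669 m, ~0.65 m/px
# Nikon D800E CIR: 7360 px across track, ~63 deg opening angle
cir_swath, cir_gsd = swath_and_gsd(1000, 63, 7360)   # ~1226 m, ~0.17 m/px

# Frame interval needed for 65% forward overlap at 100 km/h imaging speed
speed_ms = 100 / 3.6
interval_s = hsi_swath * (1 - 0.65) / speed_ms       # ~8.4 s (>= 4 s limit)
print(f"HSI GSD {hsi_gsd:.2f} m, CIR GSD {cir_gsd:.2f} m, "
      f"frame interval {interval_s:.1f} s")
```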
The major difference between the 2015 and 2016 missions was the number of spectral bands used for the HSI images. Sixteen spectral bands, at approximately every 25 nm starting from 503 nm, were recorded in 2015, while the number of bands was increased to 64 in 2016 (at approximately every 5 nm). The spectral settings of the HSI camera used in both missions are given in Table 1. Wavelength sequences, defining the central wavelength and full width at half maximum (FWHM) combinations and the band imaging order, were created using Rikola hyperspectral imager software v2.1 and uploaded to an initialized memory card of the camera. A dark reference image was always captured manually before each flight, with the lenses covered, using the same exposure times and band sequences as in the mission.
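For illustration, the nominal band-centre patterns described above can be generated programmatically; this sketch reflects the nominal spacing only, and the exact central wavelength/FWHM combinations flown are those of Table 1:

```python
# Nominal band centres as described in the text (a sketch only; the exact
# central wavelength/FWHM pairs uploaded to the camera are given in Table 1).
bands_2015 = [503 + 25 * i for i in range(16)]  # 16 bands, ~25 nm spacing
bands_2016 = [503 + 5 * i for i in range(64)]   # 64 bands, ~5 nm spacing
fwhm_nm = 10                                    # ~10 nm FWHM for all bands
# Note: under this nominal spacing the 2016 list spans ~503-818 nm; the
# actual values flown within 500-900 nm are governed by Table 1.
print(bands_2015[0], bands_2015[-1], bands_2016[0], bands_2016[-1])
```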
In addition to the two types of images (i.e., HSI and CIR), GPS and orientation sensor readings and the time of each exposure were downloaded for further photogrammetric processing. The Rikola camera produced hypercubes in its own format together with associated metadata, along with a task file containing the information needed to compile the hypercube from the raw data. ViewNX2 v2.10.3 software by Nikon was used to process the raw captured CIR images into tagged image file format (TIFF) files.
Captured Rikola images from both missions were pre-processed using the tools of the camera manufacturer. First, the vignetting effect and noise were removed, and digital number values were converted into radiance (W/(m² × sr × μm)). Then, all bands in every frame were co-registered using Coregistering v1.1 software by Rikola and converted into TIFF files. This procedure deals with the non-aligned spectral bands caused by the time-sequential imaging principle, i.e., each band of the data cube has a different position and orientation, and, assuming a 10 ms exposure time for each band, the shift between the 1st and 64th bands may amount to 40 m on the ground. Special treatments to obtain co-registered Rikola bands have been suggested in other studies, such as independent photogrammetric processing of all bands and co-registration of the mosaics [58]; however, we have observed that, with relatively low aircraft speeds and high imaging altitudes, the Coregistering v1.1 software works well in forested and urban areas with dense vegetation. Nevertheless, the co-registration quality was manually checked on all images and bands before proceeding with further processing. We believe that the band co-registration errors were no larger than one pixel around the studied crowns. All photogrammetric processing (including aerial triangulation, creation of the surface model for orthorectification, and mosaicking) was done using EnsoMOSAIC v7.6.0.1 software [63]. The pixel size of the resulting mosaic was 0.7 m. The geometric quality of the achieved mosaics was checked using orthophoto maps of Kaunas City available from the Spatial Information Portal of Lithuania [64]. Atmospheric correction was implemented in the ENVI v5.3 (Harris Geospatial Solutions, Inc., Broomfield, CO, USA) module FLAASH, based on a modified version of the MODTRAN4 radiation transfer code [65]. The 820 nm water feature and the mid-latitude summer atmospheric model were used with no predefined aerosol model. The aerosol amount in the air was estimated from visibility, which was set to 40 km. The minimum noise fraction (MNF) procedure [66] was applied to decorrelate and denoise the data.
Vignetting effect removal and filtering of spectral bands on the CIR images were carried out using RapidToolbox32 v2.2.20 software (PIEneering Oy, Helsinki, Finland) to obtain clear near-infrared (~750–940 nm), red (~560–760 nm), and green (~470–630 nm) bands, i.e., to remove the near-infrared (NIR) portion from the visible spectrum. Other photogrammetric treatments of the CIR images were conducted in the same way as for the HSI images. The pixel size of the resulting CIR mosaics was 0.2 m.

2.4. Tree Data Collection

The initial data source used to locate the trees in the field was information available from the inventory of green spaces in Kaunas City [67]. First, the trees were identified on the mosaics from the 2015 imaging mission (first on HSI and then on CIR), and their crown projections were manually digitized in ArcGIS v10.6 and stored as polygons in a geographic information system (GIS) database, ensuring that the tree attributes were compatible with those in the database of the inventory of Kaunas green spaces. Then, preliminary lists of the tree species of interest were created, with the aim of obtaining a sufficient number of trees concentrated in the central part of Kaunas. Finally, the trees were checked in the field by an experienced dendrologist to confirm whether the tree identification was correct and to make any adjustments, if needed. Only the six tree species with approximately 100 or more crowns identified on one mosaic were included in further studies (Table 2). The same tree crowns were correspondingly detected, identified, and validated on both 2016 mosaics.

2.5. Classification Approaches and Accuracy Assessment

The spectral features were extracted for zones corresponding to each manually digitized crown projection polygon stored in the GIS database, using the Zonal Statistics tool of ArcGIS v10.6 software, as the pixel mean, minimum, maximum, and standard deviation for each image band. As the majority of crowns received spectral features from both CIR and HSI images, we investigated the classification performance using the CIR and HSI images separately, as well as using spectral features from the CIR and HSI images together, which we hereafter call fused CIR and HSI images.
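The same extraction can be reproduced with open-source tools; a minimal sketch using rasterio and rasterstats in place of the ArcGIS Zonal Statistics tool (file names and the b{band}_{stat} naming convention are hypothetical):

```python
# A minimal open-source analogue of the feature extraction step: crown-level
# mean, min, max, and standard deviation per image band.
import rasterio
from rasterstats import zonal_stats

CROWNS = "crown_polygons.shp"    # manually digitized crown projections
MOSAIC = "hsi_mosaic_2016.tif"   # hyperspectral mosaic (0.7 m pixel size)

with rasterio.open(MOSAIC) as src:
    n_bands = src.count

features = {}
for band in range(1, n_bands + 1):
    stats = zonal_stats(CROWNS, MOSAIC, band=band,
                        stats=["mean", "min", "max", "std"])
    for crown_id, s in enumerate(stats):
        for stat_name, value in s.items():
            features.setdefault(crown_id, {})[f"b{band:02d}_{stat_name}"] = value
# features[crown_id] now holds up to 4 x n_bands spectral features per crown
```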
A large number of spectral features (ranging from 12 for the CIR images to 256 for the HSI images from the 2016 mission), with potentially different predictive abilities and potential mutual correlations, was thus acquired. Several approaches were tested to investigate the effects of feature selection on the classification:
  • No feature selection, i.e., all extracted features were used in the classification. Hereafter, this feature selection approach is referred to as “all spectral features”.
  • Spectral features were processed using a principal components (PC) analysis and transformation of the data in conjunction with a Ranker search, as implemented in Weka v3.8.2 software [68]. The number of PCs used in the classification was determined by choosing enough eigenvectors to account for 95% of the variance in the original data. The number of PCs used was 4 and 3 with CIR images (for the 2015 and 2016 missions, respectively), 7 and 9 with HSI, and 7 and 11 with fused CIR and HSI spectral properties (a minimal sketch of this step is given after this list). Hereafter, this feature selection approach is referred to as “principal components”.
  • Correlation-based feature selection following the approach described in reference [46], which used a similar sensor (hereafter, “correlation-based feature selection”). This approach is suggested for creating feature subsets that correlate highly with the class value but have low internal correlation between individual features. Weka v3.8.2 software was used to perform the feature selection; it allows several search algorithms to be utilised to evaluate the feature subsets. We applied three different search methods, as in reference [46]:
    • GeneticSearch, which is based on using the simple genetic algorithm to perform the search;
    • BestFirst, which searches the space of attribute subsets by greedy hill climbing augmented with a backtracking facility; and
    • GreedyStepWise, which performs a greedy forward or backward search through the space of attribute subsets.
First, each feature selection method was tested using 10-fold cross-validation. For each method, the best features were those selected at least eight times in the 10 runs. Finally, the set of features used in the classification was created from the features listed among the best ones at least twice. Additionally, we conducted the feature selection using GeneticSearch only; however, this approach did not yield better results than feature selection using all three methods, so we do not discuss it further.
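As referenced in the principal-components item above, the 95%-variance reduction can be sketched with scikit-learn (the study itself used Weka's PC transformation with a Ranker search; the data here are stand-ins):

```python
# Retain enough principal components to explain 95% of the variance, as in
# the Weka-based transformation used in the study; data are stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 256))  # ~600 crowns x 256 HSI features (stand-in)

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)     # keep eigenvectors for 95% of the variance
X_pc = pca.fit_transform(X_std)
print(X_pc.shape[1])             # the study retained 3-11 PCs on its real data
```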
Five classification algorithms—machine learning classifiers—were tested in our study for the accuracy of identification of the mentioned six deciduous tree species: (1) Simple Bayes methods: Naive Bayes [69]; (2) instance-based: k-nearest neighbours (k-NN) [70]; (3) ensemble method: RandomForest [71]; (4) non-linear methods: Multilayer perceptron (MLP) [72]; and (5) decision trees: C 4.5 [73].
Weka v3.8.2 software was used to conduct the classification. The parameters used with different classification methods are summarized in Appendix A. To choose the k value for the k-NN classification algorithm, we tested options ranging from 1 to 20. The best classification performance was achieved using a k value of 10, practically irrespective of the input data; thus, all classification tests using the instance-based algorithm are based on k = 10.
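For readers who prefer scripting over the Weka GUI, the comparison can be approximated with scikit-learn equivalents; these are analogues rather than the exact Weka implementations (in particular, DecisionTreeClassifier is CART, standing in for C 4.5), evaluated with the leave-one-out scheme described below, and the data are stand-ins:

```python
# Approximate scikit-learn analogues of the five classifiers compared in
# the study; data are stand-ins for the crown-level feature matrix.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))        # crown-by-feature matrix (stand-in)
y = rng.integers(0, 6, size=120)      # six deciduous species (stand-in)

classifiers = {
    "NaiveBayes": GaussianNB(),
    "kNN (k=10)": KNeighborsClassifier(n_neighbors=10),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=1),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "C4.5-like tree": DecisionTreeClassifier(random_state=1),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out overall accuracy {acc:.2f}")
```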
Leave-one-out cross-validation was used to check the classification performance. The validation statistics used to evaluate the general accuracy and reliability of the classification model were overall accuracy and Cohen’s Kappa:
$$\mathrm{Kappa} = \frac{\text{Observed accuracy} - \text{Expected accuracy}}{1 - \text{Expected accuracy}},$$
$$\text{Observed accuracy} = \text{Overall accuracy} = \frac{tp}{N},$$
$$\text{Expected accuracy} = \sum_{i=1}^{k} \frac{n_{ti}}{N} \times \frac{n_{ci}}{N},$$
where $tp$ refers to the number of samples predicted to be positive that are, in fact, positive; $k$ refers to the number of classes; $n_{ti}$ refers to the number of samples truly in class $i$; $n_{ci}$ refers to the number of samples assigned to class $i$; and $N$ refers to the total number of samples.
The interpretation of Cohen’s Kappa was taken from reference [74], as follows: Under 0 “poor”, 0–0.2 “slight”, 0.2–0.4 “fair”, 0.4–0.6 “moderate”, 0.6–0.8 “substantial”, and 0.8–1.0 “almost perfect”.
Class-specific classification performance was evaluated using precision (user’s accuracy), recall (producer’s accuracy), the F-score (the harmonic mean of recall and precision), and confusion matrices:
$$\mathrm{Precision} = \frac{tp}{tp + fp},$$
where $fp$ refers to the number of samples predicted positive that are, in fact, negative;
$$\mathrm{Recall} = \frac{tp}{tp + fn},$$
where $fn$ refers to the number of samples predicted negative that are, in fact, positive; and
$$F\text{-score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}.$$
The Z statistic was used to test whether two classification error matrices were statistically different:
$$Z = \frac{\left| \hat{\kappa}_1 - \hat{\kappa}_2 \right|}{\sqrt{\mathrm{var}(\hat{\kappa}_1) + \mathrm{var}(\hat{\kappa}_2)}},$$
where $\mathrm{var}(\hat{\kappa}_1)$ and $\mathrm{var}(\hat{\kappa}_2)$ refer to the variances of the Kappa estimates of the respective matrices.
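The statistics defined above can be computed directly from a confusion matrix; a short Python sketch follows (the Kappa variance uses a common large-sample approximation, since the paper does not state which estimator was applied):

```python
# Validation statistics from a confusion matrix C (rows: reference class,
# columns: predicted class). The Kappa variance below is the common
# large-sample approximation; the paper does not specify its estimator.
import numpy as np

def kappa_stats(C):
    C = np.asarray(C, dtype=float)
    N = C.sum()
    observed = np.trace(C) / N                                # overall accuracy
    expected = (C.sum(axis=1) * C.sum(axis=0)).sum() / N**2   # chance agreement
    kappa = (observed - expected) / (1 - expected)
    var = observed * (1 - observed) / (N * (1 - expected) ** 2)
    return observed, kappa, var

def per_class_scores(C):
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)
    precision = tp / C.sum(axis=0)          # tp / (tp + fp)
    recall = tp / C.sum(axis=1)             # tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

def z_statistic(C1, C2):
    """Test whether two error matrices differ via their Kappa estimates."""
    _, k1, v1 = kappa_stats(C1)
    _, k2, v2 = kappa_stats(C2)
    return abs(k1 - k2) / np.sqrt(v1 + v2)

# Illustrative 3-class example (not data from the study)
C = [[72, 10, 8], [12, 65, 13], [9, 11, 70]]
print(kappa_stats(C))     # overall accuracy, Kappa, var(Kappa)
```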

3. Results

The performance of the classification algorithms in separating the six deciduous tree species differed (Table 3); however, in practically all cases, the multilayer perceptron resulted in the highest classification accuracies and overall performance (i.e., Cohen’s Kappa and overall classification accuracy values) when using all or selected bands. The RandomForest classifier usually resulted in a lower classification accuracy than the multilayer perceptron, but higher than the naïve Bayes, k-NN, and C 4.5 classifiers. All classifiers delivered similarly poor results when using the principal component transformation as a solution to choose the most effective spectral features. Choosing eigenvectors to account for more than 95% of the variance in the original data did not affect the classification results. Thus, we focus on the multilayer perceptron when discussing other aspects of our study. We did not manage to achieve any improvement in classification accuracy by introducing the selection of spectral features, i.e., using all spectral features resulted in slightly better or similar classification accuracies compared with applying correlation-based feature selection. Generally, with the best-performing classification approach, the overall accuracy of classifying the six deciduous tree species could be interpreted as being on the edge of fair and moderate when using CIR images, and moderate when using HSI images.
The use of hyperspectral Rikola images to separate deciduous tree species was more accurate than the use of CIR images, and the differences were usually statistically significant (Table 4). For the best-performing multilayer perceptron classification algorithm using all extracted spectral features as the input, Cohen’s Kappa increased from 0.39 for CIR to 0.46 for HSI (2015 mission, with 16 HSI bands), and from 0.41 for CIR to 0.54 for HSI (2016 mission, with 64 HSI bands). The fusion of HSI Rikola and CIR Nikon image data (using the HSI and CIR spectral properties extracted for the same tree crowns together) resulted in the highest overall classification accuracy for the 2015 mission; the gain in Cohen’s Kappa due to incorporating the CIR image data was 6% (i.e., from 0.46 to 0.49), although this was not statistically significant (Z = 0.74). However, no positive impact was observed for the 2016 mission. Classifications using the 2016 HSI data with all extracted spectral features for 64 bands resulted in relatively larger accuracies than with the data from the 2015 mission (16 bands): The Cohen’s Kappas were 0.54 and 0.46, respectively. Nevertheless, the classification error matrices were not statistically different (Z = 1.88), although the difference was not far from significance.
The species-specific results are summarized in Table 5 and illustrated in Figure 3. Here, we present the results achieved using the best-performing approach, the multilayer perceptron with all extracted spectral features as the inputs. When using CIR images, only two species (Norway maple and small-leaved lime) were correctly classified in more than 50% of instances, even though the average F-score was around 0.5, i.e., 0.478 for the 2015 mission and 0.503 for the 2016 mission. The relatively larger recall than precision for Norway maple and small-leaved lime could have been influenced by the slightly greater prevalence of those species. Box elder and silver birch were the worst classified tree species. None of the species had F-scores below 0.5 when HSI images were used for classification. Average F-score values increased to 0.544 and 0.614 (for the 2015 and 2016 missions, respectively). In the latter case, nearly two out of three trees were identified correctly. Once again, Norway maple and small-leaved lime were the best classified species. The F-scores for horse chestnut and box elder increased using the 64-band HSI images compared with the 16-band case; however, the improvement in the silver birch and black locust classifications was minor, leaving those two species with the lowest F-scores. For the 2016 HSI images, silver birch and black locust were the species most frequently confused with each other (11–17% of cases); 15% of black locust and 12% of horse chestnut tree crowns were wrongly classified as Norway maple, 18% of silver birch and 13% of horse chestnut tree crowns were wrongly classified as small-leaved lime, and 10% of box elder crowns were wrongly classified as silver birch. Otherwise, omission and commission errors for individual species were below 10%. The use of spectral features extracted for the same crowns from the HSI and CIR images together improved the classification accuracies of some tree species only slightly; moreover, when spectral properties from the HSI and CIR images acquired in 2016 were fused, silver birch and box elder identification had reduced F-scores compared to the HSI-only case.

4. Discussion

Direct comparison of the achieved tree species classification results with other relevant studies is rather difficult due to the specificity of each study, including the objects of interest, data acquisition and processing techniques, overall objectives of the investigations, etc. Generally, the overall classification accuracies achieved here (63% at best) were somewhat poorer than, e.g., the 80–90% reported for the identification of different numbers of tree species in forest environments using more advanced hyperspectral imaging techniques, including the fusion of different remotely sensed data sources [35,39,45,75,76,77,78]. However, the choice of tree species analysed in our study could have automatically resulted in poorer overall classification accuracies. We considered six deciduous tree species, and the differences in spectral reflectance properties among deciduous species are usually reported to be smaller than those between deciduous and coniferous species [79,80,81]. An example of the mean spectra of all six deciduous tree species is presented in Figure 4. It is based on the averaged pixel means of the 64-band HSI image inside the zones corresponding to the crown projections. Reflectance values in the visible part of the spectrum are low and similar among species. The differences among the average reflectance values of the species increase in the near-infrared, as does the variance of the crown-level means. Both species of the genus Acer (Norway maple and box elder) have relatively large reflectance in the near-infrared portion, while the reflectance of small-leaved lime and horse chestnut is the lowest. The potential to separate Scots pine, Norway spruce, and deciduous tree species treated as one class (i.e., not as individual deciduous tree species) using colour infrared aerial photos conventional for Lithuanian forest and urban green space inventories was reported previously [82]. Thus, we did not include coniferous tree species in our investigations, even though they could have provided sufficient samples.
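The species-mean spectra of Figure 4 can be assembled from the crown-level band means with a simple groupby; a sketch with stand-in data, with column names following the hypothetical naming of the extraction sketch above:

```python
# Averaging crown-level band means per species, as used for Figure 4.
# Data and column names are stand-ins following the earlier sketches.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
bands = [f"b{i:02d}_mean" for i in range(1, 65)]       # 64-band HSI means
df = pd.DataFrame(rng.normal(size=(120, 64)), columns=bands)
df["species"] = rng.choice(
    ["silver birch", "horse chestnut", "Norway maple",
     "box elder", "small-leaved lime", "black locust"], size=120)

mean_spectra = df.groupby("species")[bands].mean()     # curves as in Figure 4
spread = df.groupby("species")[bands].std()            # crown-level variance
```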
The species-specific classification results for the best-performing case were in the range of 51–72% (based on the F-scores), with an average value of 61%. These results are similar to those reported in the most comparable, although still technologically superior, studies, e.g., references [35] and [83]. In the first study, the average classification accuracy of identifying broadleaved deciduous trees among 15 broadleaved species on AVIRIS images was 74%. The second study reported the average user’s accuracy for six mainly deciduous tree species to be 61%; however, this was achieved using airborne multisensory data. The accuracies achieved in our study are better than those in reference [84], which tested pixel-level classification with different HSI data pre-treatment approaches using the same images: The classification of seven slightly different tree species (including two coniferous species, using a support vector machine) resulted in an overall classification accuracy of 46%, with species-specific classification accuracies in the range of 17–62%.
A variety of classification algorithms have been discussed in other similar studies. Nowadays, deep learning is considered a promising tool in the remote sensing image processing domain [85]; however, such techniques remain time consuming and demanding of training data compared to standard machine learning algorithms [47,49]. Thus, non-deep learning techniques still offer a good compromise in terms of ease of implementation and efficiency. Among the machine-learning classifiers, the multilayer perceptron, when tested, has usually been reported to be the best-performing approach [39,46]. The multilayer perceptron classification algorithm also yielded the best classification results in our study. Using the RandomForest algorithm, the overall classification accuracies were 16–21% lower for CIR images and 12–19% lower for HSI images than with the multilayer perceptron, while the performance of the other three algorithms was poorer by approximately 30%. Similarly to reference [46], which inspired much of our methodological approach, no classification improvement was achieved using a selection of spectral features, i.e., using all extracted spectral features resulted in the relatively best classification performance. The multilayer perceptron, especially when all spectral features were used, required a relatively larger processing time than the other approaches (dozens of minutes vs. seconds on an ordinary office computer); however, we considered this limitation insignificant compared with the aircraft and image pre-processing times involved. Generally, the hyperspectral images resulted in better, and usually statistically significant, tree species classification than the CIR images. Even though the use of spectral properties extracted from the HSI and CIR images together did not improve the classification accuracies significantly, we still consider further imaging using two cameras, i.e., HSI and CIR/RGB, without using the CIR or RGB images for classification. This does not significantly increase the aircraft time (see below) nor the image processing time; however, panchromatically pan-sharpened HSI mosaics based on the finer spatial resolution CIR images (Figure 5) facilitate human-based field inventories and validation.
The larger number of HSI bands (64) in the 2016 mission was due to a technical upgrade of the Rikola camera; this number is the maximal technical capacity of the camera under the square frame configuration. The 2015 mission was completed nearly two months earlier in the season. The majority of researchers have described an impact of the season on the spectral separability of tree species and the accuracies of classification using spectral data [2,27,86,87,88]. Changes in the spectral properties of tree leaves during the season have also been observed in Lithuania [81,89]; however, they did not significantly influence the pattern of deciduous tree separability during July–September. Additionally, when planning the spectral settings for the 2016 mission, we tried to preserve bands with nearly the same central wavelengths as in 2015. The health status of the observed trees did not change notably during 2015–2016; thus, we attribute the slightly better performance of the 2016 HSI data to the larger number of spectral bands used. Even though the improvement in classification accuracy was not statistically significant, we aim to capture the maximum number of HSI bands in the future, especially as the automatic co-registration of the different spectral bands of the same cube using the tools provided by the camera manufacturer performs well.
Nevertheless, using 64 bands instead of 16 required a correspondingly larger memory card. HSI image data acquired using the Rikola camera are recorded on the memory card in pre-allocated files, e.g., 1905 pre-allocated files fit on a 64 GB memory card. A whole cube of 16 bands may be recorded in one pre-allocated file; however, four such files are needed to store a 64-band cube. Assuming that some images need to be captured on the ground (including the dark reference image), a maximum of 1900 16-band images vs. 470 64-band images may be recorded on a 64 GB memory card. The maximum card capacity allowed by the Rikola camera is 128 GB, which limits the endurance of one imaging flight mission, as the memory card can only be replaced on the ground with the current imaging solution. Our test area was chosen to be manageable within one flight mission.
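The card-capacity figures quoted above follow from simple arithmetic; a sketch (the number of frames reserved for dark references and ground captures is an assumption, which accounts for the small gap between the computed and quoted 64-band figures):

```python
# Memory card arithmetic for the Rikola camera: 1905 pre-allocated files fit
# on a 64 GB card; a 16-band cube occupies one file, a 64-band cube four.
files_on_64gb = 1905
ground_frames = 5                    # assumption: dark reference + ground shots

max_16_band = files_on_64gb - ground_frames         # ~1900 cubes, as quoted
max_64_band = (files_on_64gb - ground_frames) // 4  # ~475; ~470 usable after
                                                    # further ground captures
print(max_16_band, max_64_band)
```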
The use of ultra-light aircraft could be considered a potential alternative to UAVs and high-performance aerial imaging solutions if specific applications are targeted. Our previous findings suggest that the ultralight-aviation strategy is cost effective for orthophoto mapping when the solid project area is less than 20,000–30,000 ha [57]. To discuss the influence of alternative imaging equipment and settings on the performance of urban green space related inventories, we covered only solutions including the Rikola hyperspectral camera: an originally developed eight-rotor UAV system, the ULA system, and a solution with the same equipment as for the ULA installed on a Cessna 172 aircraft. Some key aircraft performance characteristics are displayed in Figure 6. Where provided, the characteristics for the CIR sensor assume independent imaging using a Nikon D800E camera with a 28 mm lens, i.e., without the Rikola camera. When the Rikola and Nikon D800E cameras were operated together, the flight was set according to the requirements of the Rikola camera, i.e., the settings were redundant for the Nikon D800E. The image resolution options here are directly related to the flying altitudes. The finest resolution was associated with the lowest flying altitude (the first point on the image for each platform, moving from left to right), which, for the UAV, was equal to 120 m, the maximum allowable altitude under the legal regulations for UAVs in Lithuania. The three other points refer to altitudes of 150, 200, and 250 m, which are possible if certain flying conditions are coordinated with the authorities. The finest resolution, or lowest flying altitude, for the ULA and the Cessna 172 aircraft was based on the shortest frame interval, which was 4 s here. Thus, the flying altitudes were 600, 800, 1000, and 1500 m for the ULA and 800, 1000, 1500, and 2000 m for the Cessna 172, respectively. Higher altitudes for UAVs are not discussed due to technical ceiling limitations. The limiting high-altitude factor for manned aircraft is the excessively coarse image resolution for individual crown identification.
The same number of HSI (and CIR) images would need to be captured to cover the reference area using the ULA or the Cessna 172, suggesting similar image processing costs. Nevertheless, the flying time needed would be less with the Cessna 172 than with the ULA due to the larger aircraft velocity. However, taking into consideration the 1 h flying costs of the Bekas X32 ULA and the Cessna 172 (provided by a private company which operates both aircraft), the total price of aerial imaging flights over the 4000 ha area with the ULA, at the same image resolution, would be 17–25% less than using the Cessna 172. Moreover, aircraft movement during the exposure would be notably larger on images acquired using the Cessna 172 than the ULA (by 50% on 0.66 m resolution images). This would aggravate the problem of non-aligned spectral bands within the same cube, potentially disabling the automatic co-registration performed in the current study. The extremely large number of images acquired and the flying times/costs involved make the UAV approach hardly realistic for mapping areas on the order of several thousand hectares, while achieving resolutions that are not needed in the context of green space inventories in urban environments. One could expect the performance of fixed-wing UAVs to be somewhere between that of the rotor-wing UAV discussed here and the ULA; however, we did not discuss this option due to restrictions on UAV flights without direct visual contact in Lithuania. In addition to the number of UAV images involving proportionally larger pre-processing times, the limitations of any small-format aerial photography should be considered: namely, the relatively small land area covered by a single image and the limited number of distinguishable objects for photogrammetric processing [90]. Acquiring CIR images alone would involve lower flying costs, and the image processing time would also be significantly reduced, as some procedures, like co-registration and radiometric calibration, could be skipped or made less time consuming. However, the user would need to accept the lower information potential of CIR images compared to HSI, thus questioning the advantages of CIR images over current operational solutions.

5. Conclusions

The inventories of urban green spaces in Lithuania are nowadays based on the use of true colour (and, in recent years, also colour-infrared) orthophoto maps, which are produced within the frames of general orthophoto mapping projects and are supposed to meet the requirements of many users with different applications and interests. The solution proposed in this paper enables such inventories to order remotely sensed materials specifically adapted to their aims and with an enhanced information content. First of all, this study demonstrated the successful installation and simultaneous use, on board a manned ultra-light aircraft, of a frame-type hyperspectral Rikola camera and a Nikon D800E camera converted and reconfigured to capture in the NIR (770–950 nm), red + NIR (550–850 nm), and green + NIR (530–630 nm) bands. The imaging and data processing systems, which are relatively cheap to acquire and operate, resulted in urban tree species identification accuracies that are compatible with those achieved by other researchers applying more technically advanced techniques. Moderate classification accuracies were achieved for the identification of six urban deciduous tree species (silver birch, horse chestnut, Norway maple, box elder, small-leaved lime, and black locust) using hyperspectral images and the best-performing classification algorithm (multilayer perceptron): The overall classification accuracy was 63% and Cohen’s Kappa was 0.54. The species-specific harmonic means of producer’s and user’s accuracies for the best-performing case were in the range of 51–72%. Out of the five tested machine learning classifiers, the multilayer perceptron resulted in the highest classification accuracies. We did not manage to improve the accuracies by pre-selecting the spectral features extracted from the images prior to the classification, i.e., using all extracted features, including the minimum, maximum, mean, and standard deviation values of the pixels inside the zones defined by the crown projections in all image bands, resulted in the best performance.
Hyperspectral and colour-infrared images were acquired simultaneously using the tested imaging solution, with the processing times needed to get photogrammetric products ready for extracting spectral properties being notably shorter for the colour-infrared images than for the hyperspectral images. However, the use of hyperspectral images resulted in a significantly better tree species classification ability. The simultaneous use of spectral properties extracted from the hyperspectral and colour-infrared images in the classification improved the accuracy for the 2015 image, while no improvement was noticed for the 2016 image. These results are encouraging and suggest the usefulness of an additional source of information in the mapping process. In parallel, we consider using two cameras in future acquisitions to produce panchromatically pan-sharpened hyperspectral mosaics based on the finer spatial resolution colour-infrared images. The two test imaging missions conducted in 2015 and 2016 differed in the number of spectral bands recorded by the hyperspectral Rikola camera, which corresponded to the technical capacity of the sensor at the time of use. Classifications using hyperspectral data cubes with 64 bands resulted in relatively larger accuracies than those using cubes with 16 bands; nevertheless, the classification error matrices were not statistically different.
We suggest that the use of ultra-light aircraft may be a compelling alternative to unmanned aerial vehicles and high-performance aerial imaging solutions if the area to be flown is in the range of thousands of hectares. This advantage becomes more obvious if the use of the frame-type Rikola hyperspectral camera is assumed. Considering the demands of urban green space inventories for hyperspectral imaging, the key arguments for using ultra-light aircraft as the platform for the sensors discussed are the low flying speed, which minimizes the problem of non-aligned spectral bands within the same cube while still being sufficient to cover areas significantly larger than those affordable with unmanned aerial vehicles, and the relatively low operation costs. We fully accept that the current findings are largely focused on designing the imaging system and processing the images generated by the proposed system; therefore, numerous methodological questions require further investigation, including the mapping methodologies and their potential improvements. The work could be extended to explore other classification algorithms, in particular deep learning ones, and additional solutions for fusing hyperspectral and colour-infrared images to improve the mapping accuracy. Collecting more ground reference material would enable the discussion of other species and other tree characteristics, e.g., the health condition and development of trees. Finally, a more in-depth discussion of the scientific and managerial solutions is needed to make ultra-light aircraft-based hyperspectral imaging truly operational in inventories of urban green spaces, or to expand it to other fields of application.

Author Contributions

Conceptualization, G.M. and S.G.; Formal analysis, G.M. and V.J.; Investigation, G.M., V.J., D.J., L.S. and W.O.; Methodology, G.M.; Software, D.J. and W.O.; Writing-original draft, G.M.; Writing-review & editing, V.J., D.J., L.S., S.G. and W.O.

Funding

The ANR HYEP project (Hyperspectral Imagery for Environmental Planning) supported the contributions of S.G. (study design and methodology) and the processing carried out by W.O.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Parameters used with different classification methods in Weka v3.8.2 software.
Naive Bayes:
The number of decimal places to be used for the output of numbers in the model (numDecimalPlaces)—2;
Use a kernel estimator for numeric attributes rather than a normal distribution (useKernelEstimator)—False.
k-nearest neighbours (IBk in Weka):
The number of neighbours to use (KNN)—10;
Identification whether hold-one-out cross-validation will be used to select the best k value between 1 and the value specified as the KNN parameter (crossValidate)—False;
The distance weighting method used (distanceWeighting)—No distance weighting;
Identification whether the mean squared error is used rather than mean absolute error when doing cross-validation for regression problems (meanSquared)—False;
The nearest neighbour search algorithm to use (nearestNeighbourSearchAlgorithm)—EuclideanDistance;
The maximum number of instances allowed in the training pool (windowSize)—0 (i.e., no limit to the number of training instances).
RandomForest:
Size of each bag, as a percentage of the training set size (bagSizePercent)—100;
Identification whether to break ties randomly when several attributes look equally good (breakTiesRandomly)—False;
The maximum depth of the tree (maxDepth)—0 (i.e., unlimited);
The number of randomly chosen attributes (numFeatures)—0 (i.e., int (log_2(#predictors) + 1) is used);
The number of iterations to be performed (numIterations)—100;
The random number seed to be used (seed)—1.
Multilayer Perceptron:
The amount the weights are updated (learningRate)—0.3;
Momentum applied to the weights during updating (momentum)—0.2;
The number of epochs to train through (trainingTime)—500;
The percentage size of the validation set (validationSetSize)—0 (i.e., no validation set used, the network was trained for the specified number of epochs);
The normalization of attributes (normalizeAttributes)—True;
The seed used to initialize the random number generator (seed)—0;
The validation threshold used to terminate validation testing (validationThreshold)—20;
The number of nodes in the hidden layer of the neural network (hiddenLayers)—estimated as (number of features + number of classes)/2.
C 4.5 (J48 in Weka):
Identification whether parts are removed that do not reduce training error (collapseTree)—True;
The confidence factor used for pruning (confidenceFactor)—0.25;
The minimum number of instances per leaf (minNumObj)—2;
The amount of data used for reduced-error pruning (numFolds)—3;
Identification whether reduced-error pruning is used instead of C4.5 pruning (reducedErrorPruning)—False;
Identification whether to consider the subtree raising operation when pruning (subtreeRaising)—True;
Identification whether pruning is performed (unpruned)—False;
Identification whether counts at leaves are smoothed based on Laplace (useLaplace)—False.

References

  1. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 6th ed.; Wiley: New York, NY, USA, 2008; 756p. [Google Scholar]
  2. Key, T.; Warner, T.A.; McGraw, J.B.; Fajvan, M.A. A comparison of multispectral and multitemporal information in high spatial resolution imagery for classification of individual tree species in a temperate hardwood forest. Remote Sens. Environ. 2001, 75, 100–112. [Google Scholar] [CrossRef]
  3. European Commission. Green Infrastructure (GI)—Enhancing Europe’s Natural Capital, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions; European Commission: Brussels, Belgium, 2013. [Google Scholar]
  4. Melles, S.; Glenn, S.; Martin, K. Urban bird diversity and landscape complexity: Species-environment associations along a multiscale habitat gradient. Conserv. Ecol. 2003, 7, 5. [Google Scholar] [CrossRef]
  5. Millennium Ecosystem Assessment. Ecosystems and Human Well-Being: Synthesis; Island Press: Washington, DC, USA, 2005. [Google Scholar]
  6. Alvey, A.A. Promoting and preserving biodiversity in the urban forest. Urban For. Urban Green. 2006, 5, 195–201. [Google Scholar] [CrossRef]
  7. Nowak, D.J.; Greenfield, E.J.; Hoehn, R.E.; Lapoint, E. Carbon storage and sequestration by trees in urban and community areas of the United States. Environ. Pollut. 2013, 178, 229–236. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Pasher, M.; McGovern, M.; Khoury, M.; Duff, J. Assessing carbon storage and sequestration by Canada’s urban forests using high resolution Earth observation data. Urban For. Urban Green. 2014, 13, 484–494. [Google Scholar] [CrossRef]
  9. Kowarik, I. On the role of alien species in urban flora and vegetation. In Plant Invasions—General Aspects and Special Problems; Pyšek, P., Prach, K., Rejmanek, M., Wade, M., Eds.; SPB Academic Publishing: Amsterdam, The Netherlands, 1995; pp. 85–103. [Google Scholar]
  10. Song, I.J.; Hong, S.K.; Kim, H.O.; Byun, B.; Gin, Y. The pattern of landscape patches and invasion of naturalized plants in developed areas of urban Seoul. Landsc. Urban Plan. 2005, 70, 205–219. [Google Scholar] [CrossRef]
  11. Hardin, P.J.; Jensen, R.R. The effect of urban leaf area on summertime urban surface kinetic temperatures: A Terre Haute case study. Urban For. Urban Green. 2007, 6, 63–72. [Google Scholar] [CrossRef]
  12. Roloff, A.; Korn, S.; Gillner, S. The climate-species-matrix to select tree species for urban habitats considering climate change. Urban For. Urban Green. 2009, 8, 295–308. [Google Scholar] [CrossRef]
  13. Kontogianni, A.; Tsitsoni, T.; Goudelis, G. An index based on silvicultural knowledge for tree stability assessment and improved ecological function in urban ecosystems. Ecol. Eng. 2011, 37, 914–919. [Google Scholar] [CrossRef]
  14. Marozas, V.; Cekstere, G.; Laivins, M.; Straigyte, L. Comparison of neophyte communities of Robinia pseudoacacia L. and Acer negundo L. in the eastern Baltic Sea region cities of Riga and Kaunas. Urban For. Urban Green. 2015, 14, 826–834. [Google Scholar] [CrossRef]
  15. Straigytė, L.; Cekstere, G.; Laivins, M.; Marozas, V. The spread, intensity and invasiveness of the Acer negundo in Riga and Kaunas. Dendrobiology 2015, 74, 155–166. [Google Scholar] [CrossRef]
  16. Garcia-Garcia, M.J.; Sánchez-Medina, A.; Alfonso-Corzo, E.; Garcia, C.G. An index to identify suitable species in urban green areas. Urban For. Urban Green. 2016, 16, 43–49. [Google Scholar] [CrossRef]
  17. Straigytė, L.; Vaidelys, T. Inventory of green spaces and woody plants in the urban landscape in Ariogala. South-East Eur. For. 2012, 3, 115–121. [Google Scholar] [CrossRef]
  18. Hostetler, A.E.; Rogan, J.; Martin, D.; DeLauer, V.; O’Neil-Dunne, J. Characterizing tree canopy loss using multi-source GIS data in Central Massachusetts, USA. Remote Sens. Lett. 2013, 4, 1137–1146. [Google Scholar] [CrossRef]
  19. Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533. [Google Scholar] [CrossRef]
  20. McGee, J.A.; Day, S.D.; Wynne, R.H.; White, M.B. Using geospatial tools to assess the urban tree canopy: Decision support for local governments. J. For. 2012, 110, 275–286. [Google Scholar] [CrossRef]
  21. MacFaden, S.W.; O’Neil-Dunne, J.P.; Royar, A.R.; Lu, J.W.; Rundle, A.G. High resolution tree canopy mapping for New York City using LIDAR and object-based image analysis. J. Appl. Remote Sens. 2012, 6, 063567. [Google Scholar] [CrossRef]
  22. Li, X.; Shao, G. Object-based urban vegetation mapping with high-resolution aerial photography as a single data source. Int. J. Remote Sens. 2013, 34, 771–789. [Google Scholar] [CrossRef]
  23. Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal RapidEye satellite data. Remote Sens. Environ. 2013, 136, 66–75. [Google Scholar] [CrossRef]
  24. Merry, K.; Bettinger, P.; Siry, J.; Bowker, J.M. Estimating urban forest carbon sequestration potential in the Southern United States using current remote sensing imagery sources. Geogr. Tech. 2015, 10, 78–89. [Google Scholar]
  25. State Forest Service. Miškotvarkos Darbų Vykdymo Instrukcija (Specifications of Forest Management Planning Projects); 2010. (In Lithuanian). Available online: https://www.e-tar.lt/portal/lt/legalAct/TAR.44E2BF82EF29/WIPCyylDED (accessed on 25 June 2018).
  26. Im, J.; Jensen, J.R. Hyperspectral remote sensing of vegetation. Geogr. Compass 2008, 2, 1943–1961. [Google Scholar] [CrossRef]
  27. Castro-Esau, K.L.; Sanchez-Azofeifa, G.A.; Caelli, T. Discrimination of lianas and trees with leaf-level hyperspectral data. Remote Sens. Environ. 2004, 90, 353–372. [Google Scholar] [CrossRef] [Green Version]
  28. Vaiphasa, C.; Ongsomwang, S.; Vaiphasa, T.; Skidmore, A.K. Tropical mangrove species discrimination using hyperspectral data: A laboratory study. Estuar. Coast. Shelf Sci. 2005, 65, 371–379. [Google Scholar] [CrossRef]
  29. Van Aardt, J.A.N.; Norris-Rogers, M. Spectral-age interactions in managed, even-aged Eucalyptus plantations: Application of discriminant analysis and classification and regression trees approaches to hyperspectral data. Int. J. Remote Sens. 2008, 29, 1841–1845. [Google Scholar] [CrossRef]
  30. Manevski, K.; Manakos, I.; Petropoulos, G.P.; Kalaitzidis, C. Discrimination of common Mediterranean plant species using field spectroradiometry. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 922–933. [Google Scholar] [CrossRef]
  31. Masaitis, G.; Mozgeris, G. Some peculiarities of laboratory measured hyperspectral reflectance characteristics of Scots pine and Norway spruce needles. In Proceedings of the 18th Annual International Conference Research for Rural Development, Jelgava, Latvia, 16–18 May 2012; pp. 25–32. [Google Scholar]
  32. Masaitis, G.; Mozgeris, G.; Augustaitis, A. Estimating crown defoliation and the chemical constituents in needles of Scots pine (Pinus sylvestris L.) trees by laboratory acquired hyperspectral data. Balt. For. 2014, 20, 314–325. [Google Scholar]
  33. Danusevicius, D.; Masaitis, G.; Mozgeris, G. Visible and near infrared hyperspectral imaging reveals significant differences in needle reflectance among Scots pine provenances. Silvae Genet. 2014, 63, 169–180. [Google Scholar] [CrossRef] [Green Version]
  34. Xiao, Q.; Ustin, S.L.; McPherson, E.G. Using AVIRIS data and multiple-masking techniques to map urban forest tree species. Int. J. Remote Sens. 2004, 25, 5637–5654. [Google Scholar] [CrossRef] [Green Version]
  35. Alonzo, M.; Roth, K.; Roberts, D. Identifying Santa Barbara’s urban tree species from AVIRIS imagery using canonical discriminant analysis. Remote Sens. Lett. 2013, 4, 513–521. [Google Scholar] [CrossRef]
  36. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83. [Google Scholar] [CrossRef]
  37. Carleer, A.; Wolff, E. Exploitation of very high resolution satellite data for tree species identification. Photogramm. Eng. Remote Sens. 2004, 70, 135–140. [Google Scholar] [CrossRef]
  38. Cho, M.A.; Debba, P.; Mathieu, R.; Naidoo, L.; van Aardt, J.; Asner, G.P. Improving discrimination of savanna tree species through a multiple-endmember spectral angle mapper approach: Canopy-level analysis. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4133–4142. [Google Scholar] [CrossRef]
  39. Raczko, E.; Zagajewski, B. Comparison of support vector machine, random forest and neural network classifiers for tree species classification on airborne hyperspectral APEX images. Eur. J. Remote Sens. 2017, 50, 144–154. [Google Scholar] [CrossRef]
  40. Ouerghemmi, W.; Gadal, S.; Mozgeris, G.; Jonikavičius, D.; Weber, C. Urban objects classification by spectral library: Feasibility and applications. In Proceedings of the Joint Urban Remote Sensing Event (JURSE), 2017. Available online: http://0-ieeexplore-ieee-org.brum.beds.ac.uk/document/7924629/ (accessed on 7 August 2018).
  41. Gougeon, F.A.; Leckie, D.G. The individual tree crown approach applied to IKONOS images of a coniferous plantation area. Photogramm. Eng. Remote Sens. 2006, 72, 1287–1297. [Google Scholar] [CrossRef]
  42. Aksoy, S.; Akçay, H.G.; Wassenaar, T. Automatic mapping of linear woody vegetation features in agricultural landscapes using very high resolution imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 511–522. [Google Scholar] [CrossRef] [Green Version]
  43. Moskal, L.M.; Styers, D.M.; Halabisky, M. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data. Remote Sens. 2011, 3, 2243–2262. [Google Scholar] [CrossRef]
  44. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with RandomForest using very high spatial resolution 8-Band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
  45. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers. Remote Sens. 2016, 8, 445. [Google Scholar] [CrossRef]
  46. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.N.; et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef]
  47. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef]
  48. Li, W.; Fu, H.; Yu, L.; Gong, P.; Feng, D.; Li, C.; Clinton, N. Stacked Autoencoder-Based Deep Learning for Remote-Sensing Image Classification: A Case Study of African Land-Cover Mapping. Int. J. Remote Sens. 2016, 37, 5632–5646. [Google Scholar] [CrossRef]
  49. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional Neural Networks for Large-Scale Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657. [Google Scholar] [CrossRef]
  50. Forzieri, G.; Tanteri, L.; Moser, G.; Catani, F. Mapping natural and urban environments using airborne multi-sensor ADS40–MIVIS–LiDAR synergies. Int. J. Appl. Earth Obs. Geoinf. 2013, 23, 313–323. [Google Scholar] [CrossRef]
  51. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef] [Green Version]
  52. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  53. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  54. Lin, Y.; Jiang, M.; Yao, Y.; Zhang, L.; Lin, J. Use of UAV oblique imaging for the detection of individual trees in residential environments. Urban For. Urban Green. 2015, 14, 404–412. [Google Scholar] [CrossRef]
  55. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and assessment of spectrometric, stereoscopic imagery collected using a lightweight UAV spectral camera for precision agriculture. Remote Sens. 2013, 5, 5006–5039. [Google Scholar] [CrossRef] [Green Version]
  56. Yang, G.; Li, C.; Wang, Y.; Yuan, H.; Feng, H.; Xu, B.; Yang, X. The DOM Generation and Precise Radiometric Calibration of a UAV-Mounted Miniature Snapshot Hyperspectral Imager. Remote Sens. 2017, 9, 642. [Google Scholar] [CrossRef]
  57. Mozgeris, G.; Augustaitis, A. Estimating crown defoliation of Scots pine (Pinus sylvestris L.) trees using small format digital aerial images. iForest 2013, 6, 15–22. [Google Scholar] [CrossRef]
  58. Mozgeris, G.; Jonikavičius, D.; Jovarauskas, D.; Zinkevičius, R.; Petkevičius, S.; Steponavičius, D. Imaging from manned ultra-light and unmanned aerial vehicles for estimating properties of spring wheat. Precis. Agric. 2018. [Google Scholar] [CrossRef]
  59. EnsoMOSAIC Aerial Mapping System—Overview. Available online: http://mosaicmill.com/cessna_system/em_system.html (accessed on 27 July 2018).
  60. RIKOLA Product Family. Available online: http://senop.fi/optronics-hyperspectral (accessed on 27 July 2018).
  61. NOAA Solar Calculator. Available online: www.esrl.noaa.gov/gmd/grad/solcalc/ (accessed on 27 July 2018).
  62. Urban Atlas. Available online: https://www.eea.europa.eu/data-and-maps/data/urban-atlas (accessed on 7 August 2018).
  63. EnsoMOSAIC Aerial Mapping System—Components. Available online: http://mosaicmill.com/cessna_system/hardware_software.html (accessed on 27 July 2018).
  64. Spatial Information Portal of Lithuania. Available online: http://www.geoportal.lt/geoportal/en/web/en/home (accessed on 27 July 2018).
  65. Matthew, M.W.; Adler-Golden, S.M.; Berk, A.; Richtsmeier, S.C.; Levine, R.Y.; Bernstein, L.S.; Acharya, P.K.; Anderson, G.P.; Felde, G.W.; Hoke, M.P.; et al. Status of atmospheric correction using a MODTRAN4-based algorithm. SPIE Proc. 2000, 4049, 199–207. [Google Scholar] [Green Version]
  66. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74. [Google Scholar] [CrossRef] [Green Version]
  67. Kauno Miesto Želdynų Žemėlapis. (Map of Kaunas City Green Areas). Available online: http://maps.kaunas.lt/zeldynai/ (accessed on 27 July 2018). (In Lithuanian).
  68. Weka 3: Data Mining Software in Java. Available online: https://www.cs.waikato.ac.nz/~ml/weka/ (accessed on 27 July 2018).
  69. John, G.H.; Langley, P. Estimating continuous distributions in Bayesian classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–20 August 1995; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1995; pp. 338–345. [Google Scholar]
  70. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66. [Google Scholar] [CrossRef] [Green Version]
  71. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  72. Class MLPClassifier. Available online: http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons/weka/classifiers/functions/MLPClassifier.html (accessed on 27 July 2018).
  73. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993. [Google Scholar]
  74. Landis, J.R.; Koch, G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef] [PubMed]
  75. Ghiyamat, A.; Shafri, H.Z.M.; Mahdiraji, G.A.; Shariff, A.R.M.; Mansor, S. Hyperspectral discrimination of tree species with different classifications using single- and multiple-endmember. Int. J. Appl. Earth Obs. Geoinf. 2013, 23, 177–191. [Google Scholar] [CrossRef] [Green Version]
  76. Baldeck, C.A.; Asner, G.P.; Martin, R.E.; Anderson, C.B.; Knapp, D.E.; Kellner, J.R.; Wright, S.J. Operational tree species mapping in a diverse tropical forest with airborne imaging spectroscopy. PLoS ONE 2015, 10, e0118403. [Google Scholar] [CrossRef] [PubMed]
  77. Priedītis, G.; Šmits, I.; Daģis, S.; Paura, L.; Krūmiņš, J.; Dubrovskis, D. Assessment of hyperspectral data analysis methods to classify tree species. Res. Rural Dev. 2015, 2, 7–13. [Google Scholar]
  78. Tagliabue, G.; Panigada, C.; Colombo, R.; Fava, F.; Cilia, C.; Baret, F.; Vreys, K.; Meuleman, K.; Rossini, M. Forest species mapping using airborne hyperspectral APEX data. Misc. Geogr. Reg. Stud. Dev. 2016, 20, 28–33. [Google Scholar] [CrossRef] [Green Version]
  79. Roberts, D.A.; Ustin, S.L.; Ogunjemiyo, S.; Greenberg, J.; Dobrowski, S.Z.; Chen, J.; Hinckley, T.M. Spectral and structural measures of northwest forest vegetation at leaf to landscape scales. Ecosystems 2004, 7, 545–562. [Google Scholar] [CrossRef]
  80. Kuusk, J.; Kuusk, A.; Lang, M.; Kallis, A. Hyperspectral reflectance of boreo-nemoral forests in a dry and normal summer. Int. J. Remote Sens. 2010, 31, 159–175. [Google Scholar] [CrossRef]
  81. Masaitis, G.; Mozgeris, G. The influence of the growing season on the spectral reflectance properties of forest tree species. Res. Rural Dev. 2013, 2, 20–26. [Google Scholar]
  82. Mozgeris, G. Miškotvarkoje naudojamų ortofototransformuotų aerovaizdų dešifravimo požymiai. [Interpretation criteria of orthophotos, used in forest inventory]. Miškininkystė 2004, 1, 49–59. (In Lithuanian) [Google Scholar]
  83. Lee, J.; Cai, X.; Lellmann, J.; Dalponte, M.; Malhi, Y.; Butt, N.; Morecroft, M.; Schönlieb, C.-B.; Coomes, D.A. Individual tree species classification from airborne multi-sensor imagery using robust PCA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2554–2567. [Google Scholar] [CrossRef]
  84. Ouerghemmi, W.; Gadal, S.; Mozgeris, G.; Jonikavičius, D. Urban vegetation mapping by airborne hyperspectral imagery: Feasibility and limitations. In Proceedings of the 9th Workshop on Hyperspectral Image and Signal Processing (WHISPERS), Amsterdam, The Netherlands, 23–26 September 2018. [Google Scholar]
  85. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  86. Murakami, T. Seasonal variation in classification accuracy of forest-cover types examined by a single band or band combinations. J. For. Res. 2004, 3, 211–215. [Google Scholar] [CrossRef]
  87. Burkholder, A. Seasonal Trends in Separability of Leaf Reflectance Spectra for Ailanthus altissima and Four Other Tree Species. Ph.D. Thesis, West Virginia University, Morgantown, WV, USA, 2010. [Google Scholar]
  88. Hesketh, M.; Sánchez-Azofeifa, G.A. The effect of seasonal spectral variation on species classification in the Panamanian tropical forest. Remote Sens. Environ. 2012, 118, 73–82. [Google Scholar] [CrossRef] [Green Version]
  89. Masaitis, G. The Potential of Hyperspectral Imaging to Detect Forest Tree Species and Evaluate Their Condition. Ph.D. Thesis, Aleksandras Stulginskis University, Akademija, Lithuania, 2013. [Google Scholar]
  90. Ciesla, W.M. Remote Sensing in Forest Health Protection; FHTET Report No. 00-03; Forest Health Technology Enterprise Team, USDA Forest Service, Remote Sensing Applications Center: Salt Lake City, UT, USA; Forest Health Technology Enterprise: Fort Collins, CO, USA, 2000; Available online: https://www.fs.fed.us/foresthealth/technology/pdfs/RemoteSensingForestHealth00_03.pdf (accessed on 7 August 2018).
Figure 1. Equipment used for the project: (a) Bekas X32 ultralight aircraft with the originally developed aviation container; and (b) the components of the imaging system.
Figure 2. Location of the study area, the flight strips, and the tree inventory areas (source of the background map: reference [62]).
Figure 3. Spatial pattern of correct and incorrect classifications using all extracted spectral features from Nikon colour-infrared images (left) and Rikola hyperspectral images (right), classified with the multilayer perceptron.
Figure 4. Mean spectra of the tested deciduous tree species, averaged over all tree crown projections (64-band Rikola hyperspectral image); the Y-error bars represent the standard error.
Figure 5. Examples of mosaics produced using images acquired in 2015: (a) HSI image, pixel size 0.7 m; R—band 15 (central wavelength 852.71 nm, FWHM 14 nm), G—band 9 (703.02 nm, FWHM 9 nm), B—band 3 (553.46 nm, FWHM 9 nm); (b) CIR image, pixel size 0.2 m; R—NIR band (~750–940 nm), G—red band (~560–760 nm), B—green band (~470–630 nm); and (c) pan-sharpened HSI image, using the CIR mosaic as the panchromatic image.
Figure 6. Performance characteristics for aerial imaging of the 4000 ha reference area using different equipment and flight settings: (a) aircraft movement during the exposure and image pixels relevant for the HSI sensor; (b) total number of images with 50% side and 70% forward overlaps; (c) total flying time; and (d) total flying costs. Note: a logarithmic scale is used for the Y-axis in (b,c).
Table 1. Spectral settings: Rikola HSI camera.

Mission | Setting Type | Setting Values
17 July 2015 | Central wavelength, nm | 503.36; 528.29; 553.46; 578.03; 602.93; 628.42; 653.17; 677.93; 703.02; 728.01; 753.40; 778.07; 803.20; 827.90; 852.71; 877.78
17 July 2015 | Full width at half maximum, nm | 10.22; 10.34; 8.92; 9.60; 12.03; 11.24; 10.15; 10.73; 9.18; 9.00; 8.94; 8.67; 10.03; 10.06; 14.07; 13.25
11 September 2016 | Central wavelength, nm | 503.45; 508.36; 513.89; 520.07; 528.13; 532.48; 538.10; 544.37; 553.18; 556.33; 562.03; 568.38; 574.12; 578.59; 586.29; 592.09; 597.90; 602.43; 610.23; 616.11; 622.00; 628.56; 634.49; 637.80; 653.29; 657.99; 664.05; 670.11; 677.53; 682.26; 688.34; 694.42; 699.84; 703.23; 712.05; 718.16; 724.28; 728.36; 735.85; 741.99; 748.14; 753.60; 760.44; 765.92; 772.09; 778.26; 784.44; 789.94; 796.14; 803.03; 807.85; 814.06; 820.28; 827.89; 832.04; 838.28; 843.50; 852.61; 855.66; 861.76; 867.87; 878.08; 880.13; 892.44
11 September 2016 | Full width at half maximum, nm | 12.35; 12.11; 11.86; 11.61; 11.31; 11.16; 10.98; 10.80; 10.58; 10.50; 10.38; 10.25; 10.15; 10.07; 9.94; 9.86; 9.77; 9.71; 9.60; 9.52; 9.43; 9.33; 9.24; 9.18; 9.56; 9.54; 9.52; 9.49; 9.45; 9.42; 9.38; 9.34; 9.30; 9.27; 9.20; 9.15; 9.09; 9.05; 8.99; 8.93; 8.87; 8.82; 8.76; 8.72; 8.67; 8.62; 8.57; 8.53; 8.50; 8.46; 8.44; 8.41; 8.39; 8.37; 8.37; 8.37; 12.93; 12.80; 12.76; 12.68; 12.60; 12.47; 12.45; 12.30
Table 2. Tree species and number of crowns identified in the images.

English Name | Scientific Name | 2015 HSI | 2015 CIR | 2016 HSI | 2016 CIR
Silver birch | Betula pendula Roth | 126 | 119 | 115 | 111
Horse chestnut | Aesculus hippocastanum L. | 111 | 97 | 95 | 95
Norway maple | Acer platanoides L. | 150 | 135 | 135 | 137
Box elder | Acer negundo L. | 107 | 103 | 80 | 79
Small-leaved lime | Tilia cordata Mill. | 164 | 149 | 156 | 157
Black locust | Robinia pseudoacacia L. | 109 | 95 | 87 | 87
Table 3. Classification accuracies and Cohen’s Kappa values achieved using different approaches to classify six urban deciduous tree species (OA—overall accuracy, %).

Algorithm | All Spectral Features (OA / Kappa) | Correlation-Based Feature Selection (OA / Kappa) | Principal Components (OA / Kappa)

Mission 2015—Nikon colour-infrared images
Naïve Bayes | 33.0 / 0.17 | 34.1 / 0.18 | 32.3 / 0.16
k-NN | 31.1 / 0.17 | 28.9 / 0.14 | 23.9 / 0.10
RandomForest | 38.8 / 0.25 | 34.9 / 0.21 | 30.3 / 0.15
Multilayer Perceptron | 49.4 / 0.39 | 38.7 / 0.25 | 34.0 / 0.19
C 4.5 | 33.1 / 0.19 | 30.0 / 0.15 | 26.9 / 0.12

Mission 2015—Rikola hyperspectral images
Naïve Bayes | 42.0 / 0.30 | 43.7 / 0.32 | 41.4 / 0.29
k-NN | 37.0 / 0.23 | 40.2 / 0.27 | 38.0 / 0.25
RandomForest | 48.5 / 0.37 | 38.4 / 0.26 | 44.6 / 0.33
Multilayer Perceptron | 55.0 / 0.46 | 54.7 / 0.45 | 44.6 / 0.33
C 4.5 | 38.6 / 0.26 | 38.1 / 0.25 | 37.1 / 0.24

Mission 2015—Fusing hyperspectral and colour-infrared data
Naïve Bayes | 42.6 / 0.31 | 43.3 / 0.31 | 44.1 / 0.32
k-NN | 40.3 / 0.28 | 39.9 / 0.27 | 40.9 / 0.28
RandomForest | 50.4 / 0.40 | 48.6 / 0.38 | 50.0 / 0.39
Multilayer Perceptron | 57.8 / 0.49 | 55.4 / 0.46 | 44.5 / 0.33
C 4.5 | 41.6 / 0.29 | 39.0 / 0.26 | 39.4 / 0.27

Mission 2016—Nikon colour-infrared images
Naïve Bayes | 34.8 / 0.20 | 37.2 / 0.22 | 35.4 / 0.18
k-NN | 33.8 / 0.19 | 34.7 / 0.20 | 26.6 / 0.11
RandomForest | 43.2 / 0.30 | 38.7 / 0.25 | 32.1 / 0.16
Multilayer Perceptron | 51.7 / 0.41 | 46.8 / 0.35 | 35.6 / 0.18
C 4.5 | 34.2 / 0.20 | 31.4 / 0.16 | 28.5 / 0.12

Mission 2016—Rikola hyperspectral images
Naïve Bayes | 43.1 / 0.31 | 42.4 / 0.30 | 48.6 / 0.37
k-NN | 42.6 / 0.30 | 38.8 / 0.29 | 42.6 / 0.30
RandomForest | 50.4 / 0.39 | 48.6 / 0.37 | 50.4 / 0.39
Multilayer Perceptron | 62.6 / 0.54 | 58.7 / 0.50 | 47.5 / 0.36
C 4.5 | 39.1 / 0.26 | 35.6 / 0.22 | 40.0 / 0.27

Mission 2016—Fusing hyperspectral and colour-infrared data
Naïve Bayes | 43.4 / 0.32 | 44.5 / 0.33 | 50.1 / 0.39
k-NN | 40.6 / 0.28 | 40.9 / 0.28 | 46.0 / 0.34
RandomForest | 49.3 / 0.37 | 47.3 / 0.35 | 50.1 / 0.39
Multilayer Perceptron | 62.5 / 0.54 | 62.1 / 0.54 | 50.0 / 0.39
C 4.5 | 37.3 / 0.24 | 38.3 / 0.25 | 39.0 / 0.26
Table 4. The significance of the differences in classification accuracy depending on the classification approaches (Z statistics—statistically significant values are marked in bold).

Compared Cases | Feature Selection | Naïve Bayes | k-NN | Random Forest | Multilayer Perceptron | C 4.5
CIR 2015 vs. HSI 2015 | All spectral features | 2.88 | 1.57 | 2.56 | 1.45 | 1.52
CIR 2016 vs. HSI 2016 | All spectral features | 2.60 | 2.22 | 1.04 | 2.84 | 3.20
CIR 2015 vs. HSI 2015 | Correlation-based feature selection | 3.16 | 2.49 | 1.06 | 4.40 | 2.37
CIR 2016 vs. HSI 2016 | Correlation-based feature selection | 1.82 | 1.24 | 2.55 | 3.15 | 1.21
CIR 2015 vs. HSI 2015 | Principal components | 2.90 | 4.15 | 3.95 | 3.13 | 2.99
CIR 2016 vs. HSI 2016 | Principal components | 3.99 | 4.54 | 5.05 | 3.74 | 3.39
Fused CIR/HSI 2015 vs. HSI 2015 | All spectral features | 0.16 | 0.92 | 3.02 | 0.74 | 0.83
Fused CIR/HSI 2016 vs. HSI 2016 | All spectral features | 0.10 | 0.28 | 0.56 | 0.02 | 0.48
Fused CIR/HSI 2015 vs. HSI 2015 | Correlation-based feature selection | 0.11 | 0.10 | 2.63 | 0.13 | 0.27
Fused CIR/HSI 2016 vs. HSI 2016 | Correlation-based feature selection | 0.55 | 1.58 | 0.31 | 0.90 | 0.80
Fused CIR/HSI 2015 vs. HSI 2015 | Principal components | 0.49 | 0.79 | 1.45 | 0.01 | 0.63
Fused CIR/HSI 2016 vs. HSI 2016 | Principal components | 0.39 | 0.92 | 0.13 | 0.68 | 0.26
HSI 2015 vs. HSI 2016 | All spectral features | 0.26 | 0.64 | 0.50 | 1.88 | 0.19
HSI 2015 vs. HSI 2016 | Correlation-based feature selection | 0.30 | 0.11 | 2.40 | 0.94 | 0.81
HSI 2015 vs. HSI 2016 | Principal components | 1.76 | 1.94 | 1.43 | 0.66 | 0.64
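For reference, pairwise Z statistics of this kind are commonly computed from the Kappa estimates of the two error matrices and their variances; a sketch of the test we assume underlies the table:

$$ Z = \frac{\left|\hat{K}_1 - \hat{K}_2\right|}{\sqrt{\widehat{\mathrm{var}}(\hat{K}_1) + \widehat{\mathrm{var}}(\hat{K}_2)}} $$

Under a two-sided test at the 95% confidence level, |Z| > 1.96 would indicate a statistically significant difference between the two classifications.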
Table 5. Species-specific classification accuracies, expressed as F-scores by tree species (achieved using the multilayer perceptron and all extracted spectral features as the inputs).

Type of Images | Mission | Norway Maple | Small-Leaved Lime | Horse Chestnut | Silver Birch | Box Elder | Black Locust | Average
Nikon colour-infrared images | 2015 | 0.615 | 0.552 | 0.436 | 0.453 | 0.340 | 0.468 | 0.478
Nikon colour-infrared images | 2016 | 0.621 | 0.542 | 0.497 | 0.379 | 0.433 | 0.544 | 0.614
Rikola hyperspectral images | 2015 | 0.617 | 0.571 | 0.502 | 0.505 | 0.561 | 0.509 | 0.544
Rikola hyperspectral images | 2016 | 0.716 | 0.674 | 0.617 | 0.511 | 0.633 | 0.633 | 0.532
Fused hyperspectral and colour-infrared data | 2015 | 0.644 | 0.592 | 0.534 | 0.517 | 0.635 | 0.511 | 0.572
Fused hyperspectral and colour-infrared data | 2016 | 0.724 | 0.703 | 0.610 | 0.487 | 0.539 | 0.591 | 0.609
