Article

Mapping Functional Urban Green Types Using High Resolution Remote Sensing Data

1 Division of Forest, Nature and Landscape, KU Leuven, Celestijnenlaan 200E, 3001 Leuven, Belgium
2 VITO Remote Sensing, Boeretang 200, 2400 Mol, Belgium
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(5), 2144; https://0-doi-org.brum.beds.ac.uk/10.3390/su12052144
Submission received: 11 February 2020 / Revised: 3 March 2020 / Accepted: 7 March 2020 / Published: 10 March 2020

Abstract:
Urban green spaces are known to provide ample benefits to human society and hence play a vital role in safeguarding the quality of life in our cities. In order to optimize the design and management of green spaces with regard to the provisioning of these ecosystem services, there is a clear need for uniform and spatially explicit datasets on the existing urban green infrastructure. Current mapping approaches, however, largely focus on large land use units (e.g., park, garden), or broad land cover classes (e.g., tree, grass), not providing sufficient thematic detail to model urban ecosystem service supply. We therefore proposed a functional urban green typology and explored the potential of both passive (2 m-hyperspectral and 0.5 m-multispectral optical imagery) and active (airborne LiDAR) remote sensing technology for mapping the proposed types using object-based image analysis and machine learning. Airborne LiDAR data was found to be the most valuable dataset overall, while fusion with hyperspectral data was essential for mapping the most detailed classes. High spectral similarities, along with adjacency and shadow effects still caused severe confusion, resulting in class-wise accuracies <50% for some detailed functional types. Further research should focus on the use of multi-temporal image analysis to fully unlock the potential of remote sensing data for detailed urban green mapping.

1. Introduction

Worldwide, urban areas are faced with major challenges imposed by rapid urbanization trends and the increased occurrence of extreme weather events due to climate change [1]. In order to safeguard the quality of urban life, cities need to be designed and managed in a smart and (more) sustainable way. In this respect, urban green represents an important tool due to the many ecosystem services (i.e., direct or indirect benefits to human society [2]) it may provide, including provisioning (e.g., food production), regulating (e.g., mitigating urban heat waves, floods and air pollution), cultural (e.g., recreation) and supporting (e.g., biodiversity, pollination) services [3,4]. Quantifying ecosystem services provided by urban green in a spatially explicit way, i.e., the production of ecosystem service maps, has been proposed as a valuable tool in support of sustainable urban planning, development and policy making [4,5,6,7]. Indeed, such maps could be used to identify problematic urban zones lacking one or several ecosystem services, which should subsequently be prioritized in urban development plans [8,9,10]. Moreover, by generating ecosystem service maps for different urban planning scenarios, informed and sound decisions can be made to ensure high environmental quality in our future cities [11,12]. Lastly, due to the ever-growing knowledge on the link between particular plant traits and ecosystem services, such maps can assist urban green managers in designing green spaces which are not only aesthetically appealing, but which also maximize the diversity and magnitude of ecosystem services provided [13].
A frequently used way of mapping ecosystem services, referred to as the value-transfer approach, relies on the combination of a land cover map and a pre-defined scoring table in which each of the land cover classes is assigned an ecosystem service score, which can consist of a simple ranking or a more advanced quantitative score [8,14,15,16,17]. Most of these ecosystem service mapping efforts, however, focus on broad land cover classes (in ecosystem services literature referred to as service providing units, e.g., forest, wetland, garden, park, allotment, agricultural land [5]), failing to capture the important effects of the specific type, properties and context of urban green on ecosystem service provisioning [5,15,18,19]. On the other hand, several more detailed typologies of urban green have been suggested, each designed for a specific application, e.g., biodiversity monitoring [20,21], land use [22,23], urban climate [24], urban hydrology and cooling [15] and management of public urban green (e.g., urban green administration of the city of Brussels, oral communication). In order to get an integrated yet detailed view on ecosystem services provided by urban green, we suggest the construction of a functional urban green typology, i.e., a typology solely based on the main functions and services of urban green and taking into account vegetation type, relevant properties and contextual information.
Aside from a functional urban green typology, an operational mapping workflow is required to effectively monitor these detailed urban green types at a city-wide scale. In this paper, rather than relying on labor- and time-consuming field inventories, we explore the potential of remote sensing data acquired from airplanes and satellites, in combination with state-of-the-art image processing techniques, for mapping functional urban green types. In particular, our main focus is on the use of optical remote sensing data, measuring the reflectance of solar light on the earth’s surface within the visible (VIS; 0.4–0.7 µm), near-infrared (NIR; 0.7–1.25 µm) and short-wave infrared (SWIR; 1.25–2.5 µm) domains of the electromagnetic spectrum (Figure 1). As each object interacts differently with different parts of this spectrum, the reflected signal can be used as a basis for (urban) land cover mapping (Figure 1, [25]). Due to the very subtle differences in spectral reflectance between different vegetation types and between individual species, detailed vegetation mapping generally requires the use of hyperspectral sensors (in which reflectance is measured in high detail using many, narrow and contiguous spectral bands; Figure 1 [26,27,28,29,30,31]). Although the technological advances and number of applications for hyperspectral sensors on board satellites [32] and UAVs (unmanned aerial vehicles, or drones [33]) are slowly increasing, most hyperspectral imagery today is captured using airplanes, generating detailed imagery with a spatial resolution (pixel size) of 2–15 m. Due to the high spatial complexity and heterogeneity of urban areas, however, airborne hyperspectral data is typically characterized by a high share of mixed image pixels, i.e., pixels containing more than one land cover class, in turn severely complicating further analysis [34,35]. Despite a growing number of subpixel mapping approaches, e.g., [36,37], detailed urban green mapping remains challenging due to the high spectral similarity amongst individual urban green types [27].
Over time, many approaches have been suggested to allow for more detailed urban mapping. Firstly, fusion of spectral data with LiDAR (Light Detection and Ranging) data has been successfully applied in urban areas for land cover mapping [38,39,40,41,42], tree species classification [43], urban green mapping [44], detection of invasive shrub species [45] and tree health estimation [46] due to the high complementarity between spectral data and spatially very detailed structural information derived from 3D LiDAR data. Secondly, hierarchical (or stratified) classification approaches (i.e., classification done at multiple thematic levels, where each level is used as a constraint to map the next, more detailed level) have been shown to increase the mapping accuracy of detailed land cover classes [47,48,49]. Thirdly, Object-Based Image Analysis (OBIA), in which similar pixels are grouped into homogeneous image objects prior to classification, represents another promising technique [50]. By using objects rather than pixels, additional information (i.e., size, shape and internal variability of image objects) becomes available to the classification algorithm. Although commonly being applied to (airborne) hyperspectral data [51,52,53,54], the added value of OBIA becomes most apparent when applied to high spatial resolution multispectral data, allowing distinction between detailed land cover classes based on limited spectral information [55,56,57,58,59]. Here, we will further explore these three analysis techniques, specifically for detailed urban green mapping.
In summary, the overall goal of this study is to develop a framework (typology) and associated workflow based on remote sensing data for accurate mapping of functional urban green types. By assessing the potential of various remote sensing data sources, i.e., airborne hyperspectral data, high-resolution multispectral satellite data and airborne LiDAR data (and different combinations thereof), we additionally aim for increased insight into the most relevant input data to be used for detailed urban green mapping.

2. Materials and Methods

2.1. Functional Urban Green Typology

Urban green can be studied on many different scales, ranging from parcel level (park, garden) to the individual plant scale [6]. As parks and gardens may provide entirely different services depending on their composition (e.g., a lawn mainly serves as a playground for kids, whereas a botanical garden is more interesting from an ecological, educational and scientific point of view [18]) and individual plants may serve other purposes depending on their context (e.g., a row of street trees as part of an ecological network, versus a solitary tree for ornamental purposes), we decided to focus on urban green elements as the main unit of our typology. An urban green element is defined here as an assemblage of individual plants together providing similar functions and services.
Based on a literature review, combined with in-house expert knowledge, we identified key plant properties affecting ecosystem service provisioning, including all four ecosystem service categories (provisioning, regulating, cultural and supporting services [2]). Using these insights, we categorized urban green elements into a total of 23 functional urban green types and provided a qualitative score on the contribution of each type to the most relevant urban ecosystem services (Table 1). Functional urban green types were categorized into three main categories, i.e., tree, shrub and herbaceous plants. Due to their large size and leaf area compared to other urban green elements, trees are known to excel at providing regulating ecosystem services [60,61,62]. Further distinction into multiple tree functional types was therefore mainly based on their production potential (food and woody biomass), cultural benefits (potential for recreation and aesthetic value), internal biodiversity and their potential to support more biodiversity. The extent (surface area), structural diversity, spatial configuration (shape, area/edge length, connectivity) and management (frequency of harvesting and human disturbance) were identified as the main factors affecting the provisioning of these specific ecosystem services by urban trees [63,64,65,66]. Based on these characteristics, eight tree functional types were defined, ranging from (semi-)natural forests to individual isolated trees (Table 1). Precise definitions of these functional urban green types were based on expert knowledge and local good practices and guidelines for urban green management. Specifically in terms of regulating services, leaf phenology (evergreen/deciduous), leaf type (broadleaf/coniferous) and tree size were found to be crucial factors [60,61,62,67]. Although not explicitly included in our typology, we highly recommend these tree characteristics to be used as supplementary information to further refine any ecosystem service assessment of urban trees.
Due to a general lack of scientific literature specifically focusing on ecosystem service provisioning by shrubs in an urban context, largely the same reasoning as used for trees was applied. Three functional types were defined based on a combination of extent and spatial configuration (Table 1). Due to their size and compact shape, large scrub patches can significantly contribute to regulating ecosystem services and provide valuable habitats for various animal species. Hedges are a specific type commonly used in urban areas as noise and privacy barriers, but at the same time present habitat opportunities for smaller animals, both vertebrates and invertebrates [68]. Finally, individual or small groups of shrubs, mainly planted for ornamental purposes in parks and gardens, were treated as a separate functional type. As for trees, leaf phenology and leaf type constitute important complementary information for detailed assessment of ecosystem services.
From a functional perspective, herbaceous plants are significantly different from trees and shrubs. Due to their relatively small size, their contribution to regulating services is rather limited [69]. Herbaceous and woody vegetation closely associated with buildings (allowed to grow closely against building façades or on roofs) however represents a notable exception to this general rule of thumb. Many previous studies have shown the regulating benefits of these urban green types, which mainly relate to stormwater management, water purification, improved insulation of buildings and mitigating air pollution [70,71,72,73,74,75,76]. Therefore, both façade vegetation and green roofs have been identified as separate functional urban green types (Table 1). Additional distinction among herbaceous urban green types was based on their food and biomass production potential (in turn determined by human management and plant characteristics such as presence of edible plant parts, plant height and growth rate) and their internal plant composition (flowering versus grass plants) and diversity. The latter two characteristics both affect to a large extent their visual appeal [77] and potential to support biodiversity and pollination [78]. A total of twelve functional types dominated by herbaceous plants were defined (Table 1). Food crops, which are gaining more attention in urban areas [79,80], were divided into large-scale agricultural fields and small-scale allotment gardens. The latter are characterized by higher structural and plant diversity, in turn contributing to various other ecosystem services [81]. Grass-dominated types (including lawns, pastures and meadows) were subdivided based on their internal biodiversity and human use, whereas further distinction within flowering plants was made based on size (tall versus low herbs) and degree of human interference (semi-natural flower fields and water plants versus intensively managed flower beds).

2.2. Mapping Functional Urban Green Types Using Remote Sensing

2.2.1. Study Area, Selection of Functional Urban Green Types and General Classification Approach

The Brussels Capital Region is defined as an administrative region consisting of the city of Brussels together with 18 surrounding municipalities. This region is among the most densely built and intensely used areas for residential, commercial and industrial purposes in Europe [82]. Nevertheless, its total area of green space has been estimated at 8714 ha, or 54% of its total area [83], of which roughly 20% is privately owned [84]. Most urban green is located near the edges of the Capital Region (30%–70% urban green cover), whereas the dense city center only contains around 10% of green space [84]. The exact extent of our study area was dictated by the availability of airborne remote sensing data used in this study and is situated in the eastern part of the Capital Region (Figure 2). This particular area comprises a large diversity of urban structure types, including dense residential zones in the west, sparse residential zones in the east and south and industrial/commercial and more rural areas in the north. Some urban green types defined in our functional typology were not considered in the remainder of this study, either because of their intrinsic dimensions making them nearly impossible to detect using remotely sensed data sources, or because of their limited occurrence within our study area (see greyed-out entries in Table 1).
As can be seen from Table 1, our functional urban green types were defined both in function of plant type (e.g., deciduous tree versus tall herbaceous vegetation) and spatial configuration (e.g., tree row versus solitary tree). Therefore, we opted here for a two-stage classification approach. In a first stage, the different plant types present within the functional urban typology were identified. A hierarchical classification scheme was defined (Table 3). Aside from the usual non-vegetation classes regularly included in urban land cover studies (roofs, pavement, soil and water [37]), cars were explicitly treated separately due to their high abundance and confusion with shrubs. For this first stage, the potential of different datasets and classification approaches was investigated. Classification results were then used to serve as building blocks in a second, rule-based classification approach to make a distinction between patches, rows and individual trees and shrubs. The reader is referred to Section 2.2.4 for a detailed explanation on the classification approach.

2.2.2. Remote Sensing Data

Airborne hyperspectral data was acquired using the APEX sensor on June 30, 2015. The sensor was operated at a flying altitude of 3600 m a.s.l. which resulted in imagery featuring a spatial resolution of 2 m. The APEX sensor covers the spectral range of 400–2500 nm. After removal of water absorption bands, 218 spectral bands remained for further analysis. More information on image pre-processing can be found in [85]. Airborne LiDAR data was collected around the same time in Summer 2015 by Aerodata Surveys Nederland BV. The resulting LiDAR point cloud data featured an average resolution of 15 pts/m². Finally, a Worldview-2 image covering the entire Brussels Capital Region and captured on July 24, 2016 was put at our disposal by Brussels Environmental Agency (BIM). Worldview-2 consists of eight spectral bands covering the spectral range between 400 and 1050 nm. The raw image data was atmospherically corrected using ATCOR and orthorectified using a 25 cm digital terrain model in ERDAS Imagine software. Finally, the spectral bands were pan-sharpened in ENVI 5.2 software (Harris Geospatial Solutions) resulting in a pixel size of 0.5 m.

2.2.3. Training and Validation Data on Urban Composition and Functional Urban Green Types

Twenty 100 × 100 m validation blocks were delineated using a stratified random sampling approach throughout the study area, thereby ensuring different urban structure types (dense and sparse residential, industrial/commercial and urban green zones) to be sufficiently represented (Figure 2). Due to privacy and accessibility issues, privately owned green areas were avoided as much as possible. Within these validation blocks, land cover and functional urban green types (according to the typology defined in Table 1) were manually mapped during a field visit, visually aided by a 7.5 cm resolution RGB orthophoto acquired in winter 2014. After digitization, a random subsample of objects was selected within each block to serve as training data.
In addition to the validation blocks, fifteen additional blocks were delineated throughout the study area, ranging in size from 1.7 to 38.6 ha, to further complete our dataset of training objects. Rather than mapping land cover in a spatially continuous way as was done for the validation blocks, points were digitally drawn in these areas and labeled based on the same RGB orthophoto and Google Street View. Drawing of points was done with special attention to those land cover classes and urban green types which were underrepresented in the dataset composed by the validation blocks. Table A1 summarizes the sample size of the training dataset per land cover class and compares these to the relative abundances of the classes in the validation blocks.

2.2.4. Detailed Classification Approach

In this study we explored the potential of combining hyper- or multispectral data with structural information derived from airborne LiDAR data in an object-based classification approach to produce a detailed land cover map with particular focus on functional urban green types. In essence, we first identified the most useful features for detailed urban green mapping by training several Random Forest models with varying sets of input data. Secondly, we applied the best performing model to our twenty validation blocks to assess its potential to generate spatially continuous land cover maps. Finally, some additional, rule-based classification steps were performed to enhance the final product. Our detailed workflow comprises seven parts (Figure 3), described in more detail in the sections below.

2.3. Calculation of Spectral and Structural Features

Hyperspectral datasets typically contain more than 200 spectral bands, often showing high mutual correlations and hence unnecessarily slowing down processing times. Here, we wanted to test whether this information could be summarized without affecting classification accuracy. Two common ways to summarize these data are (1) deriving spectral indices (i.e., ratios of spectral bands known to correlate with the occurrence or specific property of a particular land cover class) and (2) data transformation specifically aiming at reducing data dimensionality while retaining maximum information content. In this study we calculated a set of eight spectral indices thought to be relevant for urban land cover mapping, i.e., Normalized Difference Vegetation Index (NDVI [86]), Normalized Difference Water Index [87,88,89], a grass index highlighting the difference between trees and lawn [46], red/green ratio, blue/green ratio and overall brightness (defined as the mean value of all spectral bands). Moreover, we applied a forward Minimum Noise Fraction transformation (MNF [90]) on the APEX bands and retained the first 30 bands based on visual inspection of the resulting eigenvalues. As Worldview-2 data only consists of eight spectral bands, the effect of data reduction was not tested for this dataset. Only NDVI was calculated given its expected relevance for land cover mapping.
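As an illustration of the first summarization route, a few of the indices named above can be sketched with NumPy; note that the band positions and the toy reflectance values below are assumptions for demonstration, not the APEX band layout, and only a subset of the eight indices is shown.

```python
import numpy as np

def spectral_indices(cube, red_idx, nir_idx, green_idx, blue_idx):
    """Compute NDVI, band ratios and overall brightness from a
    (bands, rows, cols) reflectance cube. Band positions are passed
    explicitly because they depend on the sensor."""
    red, nir = cube[red_idx].astype(float), cube[nir_idx].astype(float)
    green, blue = cube[green_idx].astype(float), cube[blue_idx].astype(float)
    eps = 1e-9  # avoid division by zero over dark shadow/water pixels
    ndvi = (nir - red) / (nir + red + eps)
    red_green = red / (green + eps)
    blue_green = blue / (green + eps)
    brightness = cube.mean(axis=0)  # mean over all spectral bands
    return ndvi, red_green, blue_green, brightness

# toy 4-band cube (blue, green, red, nir), 2 x 2 pixels
cube = np.array([[[0.05, 0.05], [0.05, 0.05]],
                 [[0.08, 0.08], [0.08, 0.08]],
                 [[0.06, 0.30], [0.06, 0.30]],
                 [[0.45, 0.35], [0.45, 0.35]]])
ndvi, red_green, blue_green, brightness = spectral_indices(
    cube, red_idx=2, nir_idx=3, green_idx=1, blue_idx=0)
```

The left pixel column (high NIR, low red) yields a high NDVI typical of vegetation, while the right column resembles a bare surface.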
The 3D LiDAR point cloud data was converted into a set of 2D features potentially useful for land cover and urban green classification. Aside from height above ground level (normalized digital surface model; nDSM) and intensity, which represent the most frequently used LiDAR features in land cover classification [91], an additional feature related to the permeability of objects was adopted from [92]. This feature, termed treeIndex here, is based on the difference between first and last LiDAR returns and facilitates the differentiation between trees and buildings [46]. All of these features were computed using OPALS software at a resolution of 25 cm, capped off at certain thresholds to remove outliers (i.e., any value above the threshold is set to the threshold value) and scaled between 0 and 1. Two versions of nDSM were created using two different capping thresholds of respectively 15 (nDSM1) and 3 m (nDSM2). Whereas nDSM1 more relates to the actual height of the objects, nDSM2 specifically highlights small height variations, thereby increasing the detection rate of low objects (e.g., hedges, low shrub; Figure 4). Capping thresholds for intensity and treeIndex amounted to 500 and 3 m respectively.
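The capping-and-scaling step applied to the LiDAR features can be sketched as follows (illustrative NumPy code; clamping negative heights to zero is our assumption, not stated in the text).

```python
import numpy as np

def cap_and_scale(raster, cap):
    """Cap values above a threshold, then scale linearly to [0, 1]."""
    capped = np.minimum(raster, cap)
    capped = np.maximum(capped, 0.0)  # assumption: negative heights set to 0
    return capped / cap

ndsm = np.array([0.2, 2.5, 7.0, 22.0])  # heights above ground, in metres
ndsm1 = cap_and_scale(ndsm, 15.0)       # emphasises tall objects (trees, buildings)
ndsm2 = cap_and_scale(ndsm, 3.0)        # emphasises low objects (hedges, low shrub)
```

With the 3 m cap, a 2.5 m hedge fills most of the [0, 1] range, whereas under the 15 m cap it remains close to zero, which is exactly the effect described above.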
Aside from the actual height and brightness of objects, the internal variation of both features within an object also shows potential for classification purposes (e.g., the height of a building is more homogeneous than the height of a single tree’s canopy, and natural grasslands show small height variations compared to lawns). In image analysis, such features are referred to as image texture. Here, we calculated four textural features (entropy, sum entropy, variance and sum variance) based on three different LiDAR features (nDSM1, nDSM2 and intensity).

2.4. Creation of Image Objects—Image Segmentation

Image segmentation, or the process of combining image pixels to create relatively homogeneous, non-overlapping image objects, lies at the foundation of object-based image analysis approaches. In order to enhance the detection of small objects, segmentation in this study was only based on LiDAR features (nDSM1, nDSM2 and intensity), i.e., the dataset featuring the highest spatial resolution. We adopted the segmentation workflow proposed by [55], which is based on the i.segment algorithm in GRASS GIS. The algorithm’s parameters were set for each training and validation block separately using the Unsupervised Segmentation Parameter Optimization method [93]. An example of segmentation inputs and resulting output is provided in Figure 4.
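The intuition behind region-growing segmentation such as i.segment can be illustrated with a toy single-band version; this is a greatly simplified stand-in (the similarity rule, which compares each candidate pixel to the segment seed, and the threshold are ours, not the algorithm's).

```python
import numpy as np
from collections import deque

def region_grow(img, threshold):
    """Toy region growing: 4-connected neighbours join a segment when their
    value differs from the segment seed by less than `threshold`."""
    labels = np.full(img.shape, -1, dtype=int)
    current = 0
    for seed in np.ndindex(img.shape):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and labels[nr, nc] == -1
                        and abs(img[nr, nc] - img[seed]) < threshold):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        current += 1
    return labels

# toy scaled nDSM: a low lawn block next to a tall canopy block
ndsm = np.array([[0.1, 0.1, 0.9],
                 [0.1, 0.1, 0.9],
                 [0.8, 0.9, 0.9]])
segments = region_grow(ndsm, threshold=0.3)
```

The low and tall pixels end up in two separate objects, mimicking how homogeneous image objects are formed before classification.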

2.5. Extraction of Training and Validation Object Features

For each image object, the mean and standard deviation of all spectral, structural and textural features were calculated and extracted using the i.segment.stats algorithm in GRASS GIS [55]. Additionally, geometrical features related to the object size and shape were also calculated, i.e., area, perimeter and compactness.
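An illustrative NumPy analogue of this per-object feature extraction (not the actual i.segment.stats implementation) is shown below; the toy feature raster and labels are assumptions.

```python
import numpy as np

def object_stats(feature, labels):
    """Per-object mean and standard deviation of one feature raster,
    given a raster of segment labels."""
    stats = {}
    for lbl in np.unique(labels):
        vals = feature[labels == lbl]
        stats[int(lbl)] = (float(vals.mean()), float(vals.std()))
    return stats

feature = np.array([[0.2, 0.2, 0.9],
                    [0.2, 0.2, 0.7]])
labels = np.array([[0, 0, 1],
                   [0, 0, 1]])
stats = object_stats(feature, labels)
# object 0 is perfectly homogeneous (std 0); object 1 mixes 0.9 and 0.7
```

In the full workflow these per-object means and standard deviations, together with geometrical features, form the feature vectors fed to the classifier.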

2.6. Identifying Most Suitable Image Features for Plant Type Classification through Random Forest Models

Random forest (RF) is a machine learning approach increasingly being used in remote sensing applications due to its relatively high accuracy and computational efficiency compared to other frequently used machine learning approaches [94]. Since our goal was to compare the potential of different image datasets to label individual image objects, multiple RF models were trained, each based on a distinctive set of object features (Table A2). In addition, we also tested the benefits of a hierarchical classification approach, in which separate RF models were constructed and combined for subsequently differentiating vegetation from non-vegetation, woody from non-woody vegetation and more detailed urban green types. We split our training dataset (containing a total of 2543 objects) into training (70%) and testing (30%) objects according to a stratified random selection procedure (see also Table A1). Training was done using the default value of 500 trees, a random selection of ten values for the mtry hyperparameter and a 10-times-repeated 5-fold cross-validation approach (based on [55]). The best RF model was selected based on the total and class-based accuracies acquired for the independent test set. Variable importance of individual input features was assessed by means of the mean decrease in prediction error after permuting each predictor variable (default in R caret package).
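The stratified 70/30 split described above can be sketched as follows; this is an illustrative Python version (the study used R and the caret package), and the seed and class counts are assumptions.

```python
import numpy as np

def stratified_split(labels, train_frac=0.7, seed=42):
    """Split object indices into train/test sets, preserving the class
    proportions by sampling within each class separately."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        cut = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:cut])
        test_idx.extend(idx[cut:])
    return np.array(train_idx), np.array(test_idx)

labels = np.array(["tree"] * 10 + ["lawn"] * 20)
train, test = stratified_split(labels)
# 7 of 10 tree objects and 14 of 20 lawn objects end up in training
```

Stratifying matters here because many functional urban green types are rare; a plain random split could leave a scarce class entirely out of either set.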

2.7. Application of the Best Model, Post-Classification Procedure and Accuracy Assessment

Being able to correctly classify homogeneous objects is a prerequisite to, but does not suffice for, the production of spatially continuous classification maps. The best performing RF model (cf. previous section) was therefore applied to the entire set of validation blocks to create spatially continuous classification maps. These results were critically evaluated on a visual basis and some re-classification rules were defined using eCognition software to correct for the most obvious errors (as was also done by e.g., [57]; see Table A4 for more details). Final classification accuracies were determined by calculating a confusion matrix and associated accuracy statistics (caret package, R software). Due to the unbalanced validation dataset used in this study, balanced accuracy (scaling between 0 and 1) was selected to describe model performance for individual classes.
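Class-wise balanced accuracy, as reported by the caret package, is the mean of a class's sensitivity and specificity, which makes it robust to the unbalanced validation data; a minimal sketch with made-up labels:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, positive):
    """Balanced accuracy for one class: mean of sensitivity (true positive
    rate) and specificity (true negative rate)."""
    t = np.asarray(y_true) == positive
    p = np.asarray(y_pred) == positive
    sensitivity = (t & p).sum() / t.sum()
    specificity = (~t & ~p).sum() / (~t).sum()
    return (sensitivity + specificity) / 2

y_true = ["shrub", "shrub", "shrub", "lawn", "lawn", "lawn", "lawn", "lawn"]
y_pred = ["shrub", "lawn",  "shrub", "lawn", "lawn", "lawn", "shrub", "lawn"]
ba = balanced_accuracy(y_true, y_pred, "shrub")
```

Here the rare shrub class keeps a meaningful score even though lawn dominates the sample, which plain overall accuracy would obscure.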

2.8. Spatial Configuration of Trees and Shrubs

After obtaining a detailed plant type map, further distinction between detailed tree and shrub functional types based on spatial configuration (e.g., tree rows versus solitary trees) was accomplished through an additional rule-based classification procedure, which is described in detail in Table 2. In essence, individual trees and shrub objects were merged together, after which the resulting objects were classified based on their size, shape and distance to other trees/shrubs.
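A toy version of such a rule set might look like the following; the thresholds and feature names are purely illustrative and are not those of Table 2.

```python
def tree_configuration(area_m2, length_m, width_m,
                       patch_area=1000.0, row_ratio=4.0):
    """Label a merged tree object as patch, row or individual tree based on
    its size and elongation (hypothetical thresholds)."""
    if area_m2 >= patch_area:
        return "tree patch"        # large, contiguous canopy
    if width_m > 0 and length_m / width_m >= row_ratio:
        return "tree row"          # long, narrow object, e.g. street trees
    return "individual tree"       # small, compact crown

labels = [tree_configuration(2500, 60, 45),  # large compact canopy
          tree_configuration(300, 60, 5),    # long narrow strip
          tree_configuration(40, 7, 6)]      # single small crown
```

In the actual workflow, distance to neighbouring trees/shrubs would enter as an additional criterion before these shape rules are applied.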

3. Results

3.1. Potential of Remote Sensing Data for Differentiating Functional Urban Green Types

As stated in Section 2.2.4, multiple Random Forest models were constructed in order to assess the potential of different image datasets for distinguishing functional urban green types. When targeting basic urban green classes, high class-wise accuracies (>0.8) were attained, irrespective of the image datasets being used, except for the soil (0.50–0.72) and agriculture (0.59–0.76) classes (Table 3a). Most of the classes could be mapped with adequate accuracy using just LiDAR data. Agriculture, extensive green roofs, soil and water were better discriminated upon adding spectral information, with hyperspectral data contributing more (respective increase by 21%, 18%, 22% and 18%) compared to multispectral data (14%, 7%, 6% and 0%). Creating multiple models in a hierarchical classification approach only slightly benefited the classification accuracy of basic vegetation types (average increase of 1%), but had a clear positive effect for the soil (6%) and water (19%) classes. When considering detailed urban green classes, the differences in performance between the different image datasets became more pronounced. Total accuracy increased by respectively 3% and 8% upon adding multispectral and hyperspectral data to LiDAR data, with maximum increases per class for the latter amounting to 34% (evergreen coniferous tree) and 38% (arable land) (Table 3b). Despite the availability of detailed hyperspectral and LiDAR data, several detailed urban green types remained hard to distinguish (evergreen coniferous and broadleaf shrub, flower bed and vegetable garden featured maximum accuracies below 0.80). Whereas the non-hierarchical approach mostly favored the most abundant land cover classes in our dataset (e.g., lawn), the hierarchical approach resulted in a considerable improvement for some of the more uncommon classes (e.g., flower bed by 21%; evergreen coniferous tree and arable land both by 7%).
Regarding the specific object features to be used as input to the model, we observed that the 30 MNF bands derived from APEX data consistently outperformed the use of spectral indices based on the same data, as well as the 218 original APEX bands, except for mutually distinguishing the non-vegetation classes (see Appendix A, Table A3). The addition of LiDAR features consistently increased the classification accuracy, most notably in case of the more detailed urban green classes. Including textural and geometrical features on the other hand only increased model performance in some instances and only to a very limited extent. Based on the top five ranking of feature importance within the best performing Random Forest models, the most valuable object features for classification included nDSM1, nDSM2, treeIndex, APEX MNF band 2, LiDAR intensity, APEX MNF band 6, texture of nDSM2, followed by more APEX MNF bands.

3.2. Producing a Functional Urban Green Map

Based on the outcomes presented in Table 3, the hierarchical model using hyperspectral and LiDAR data was applied to all validation blocks to generate spatially continuous classification maps, one for each block. Overall accuracy of these initial maps was good, i.e., 0.86 for basic vegetation classes (Table 4a) and 0.84 for detailed classes (Table 4b), but mainly driven by the high coverage of relatively easily distinguishable classes like buildings, pavement, deciduous broadleaf trees and lawn. Amongst the basic classes, lowest class-wise accuracies were found for shrub (0.55), herbaceous vegetation (0.48) and soil (0.55). Aside from high mutual confusion between these classes, pavement and lawn turned out to be major sources of classification error for all three classes. With regard to the more detailed vegetation classes (Table 4b), evergreen coniferous trees were frequently classified as deciduous broadleaf trees, whereas detailed shrub and herbaceous vegetation classes featured even lower class-wise accuracies below 0.5 (Table 3b). Main sources of confusion for shrubs included broadleaf deciduous trees and mutual confusion between the three shrub classes, while more than half of the pixels labeled as either tall herb or flower beds were wrongly classified as meadows.
The Random Forest model not only produces a final classification label per image object, but also provides an indication of uncertainty through estimated class membership probabilities. By flagging objects whose maximum class membership probability fell below a threshold of 0.7, we explicitly mapped the location of objects classified with high uncertainty (Figure 5; threshold chosen based on visual inspection of the results). Aside from the confusion between detailed vegetation classes mentioned earlier, high classification uncertainty was primarily found near object borders (e.g., building edges classified as trees), in transition zones between two land covers (e.g., narrow pavement next to lawns modelled as lawn) and in shadowed areas (e.g., shadowed pavement classified as water or vegetation). These zones made up 17% and 21% of the total area to be classified for the basic and detailed classification, respectively (Table 4). Discarding these uncertain zones from the accuracy assessment indeed considerably boosted classification performance, up to an overall accuracy of 0.94 for both the basic and detailed classification. Still, class-wise accuracies for detailed herbaceous vegetation classes (tall herb, flower bed and meadow), deciduous broadleaf shrub and soil remained rather low (≤0.75), indicating severe confusion between these particular classes.
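The uncertainty flagging can be sketched as follows; the model and data are synthetic (two partially overlapping classes stand in for spectrally similar green types), but the thresholding logic matches the 0.7 cut-off described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Two partially overlapping synthetic classes as a stand-in for
# spectrally similar urban green types.
X = np.r_[rng.normal(0, 1, (200, 3)), rng.normal(1.5, 1, (200, 3))]
y = np.r_[np.zeros(200), np.ones(200)]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Maximum class-membership probability per object; objects below the
# 0.7 threshold are flagged as uncertain and can be mapped separately.
proba = rf.predict_proba(X)
uncertain = proba.max(axis=1) < 0.7
print(f"{uncertain.mean():.0%} of objects flagged as uncertain")
```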
Instead of merely discarding these uncertain areas, we developed a rule-based post-classification procedure (Table A4), specifically aiming to reduce errors in zones affected by adjacency and/or shadow effects. As an active remote sensing technology with high spatial detail, LiDAR data is inherently less prone to these disturbing effects and was hence the main data source for these post-classification corrections. A notable exception is the water class, for which we used another water index, specifically designed to reduce confusion with built-up surfaces in urban areas [95]. Although the net effect on overall classification accuracy was small, the proposed algorithm did increase the performance for all basic vegetation classes (mainly lawn) and water, reduced the accuracies for the pavement and soil classes (Table 4a) and, more importantly, produced a classification map that visually made more sense (Figure 5 and Figure 6). In particular, the detection of building edges was improved, thereby reducing confusion between roofs and trees, whereas pavement and soil were less frequently misclassified as low vegetation or water. Aside from reduced confusion between trees and shrubs, however, the post-classification procedure did not enhance the distinction between detailed urban green types (see Table A5, Table A6, Table A7 and Table A8 for a comparison of confusion matrices before and after post-classification correction).
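As an illustration, two of the shadow/water rules of Table A4 can be re-implemented outside eCognition on a simplified per-object representation (a dict of object features; this stand-in representation is ours, while the thresholds are those of Table A4):

```python
def correct_object(obj):
    """Apply two of the Table A4 shadow/water rules to one image object.

    The object is a plain dict of features here; in the actual workflow
    these rules operate on eCognition image objects.
    """
    o = dict(obj)
    # Rule: shadowed areas wrongly classified as water
    # (applied only to objects with class probability below 0.7).
    if o["label"] == "water" and o["probability"] < 0.7:
        if o["ndwi"] <= -0.3 and o["intensity"] > 0.3 and o["ndvi"] > 0.6:
            o["label"] = "herbaceous vegetation"
        elif o["ndwi"] <= -0.3:
            o["label"] = "pavement"
    # Rule: small, highly vegetated "water" patches are relabelled as lawn.
    if o["label"] == "water" and o["area"] < 200 and o["ndvi"] > 0.6:
        o["label"] = "lawn"
    return o

shadowed = {"label": "water", "probability": 0.55, "ndwi": -0.5,
            "intensity": 0.2, "ndvi": 0.1, "area": 5000}
print(correct_object(shadowed)["label"])   # reclassified as pavement
```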
Finally, trees and shrubs were further classified based on their spatial configuration (cf. Table 2). As Figure 7a shows, this simple procedure worked well for distinguishing narrow hedges from larger groups and patches of shrubs. Detection of tree rows, on the other hand, was not always successful, particularly when the tree row directly adjoined a neighboring tree patch or when the crowns of individual trees within the row did not overlap (Figure 7b,c).
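A minimal sketch of a configuration rule in this spirit: the function below labels a binary shrub mask as a hedge when it is long and narrow. The thresholds (aspect ratio > 3, mean width ≤ 2 m) are illustrative assumptions, not the actual rules of Table 2:

```python
import numpy as np

def shrub_configuration(mask, pixel_size=0.5, max_hedge_width_m=2.0):
    """Label a binary shrub mask as 'hedge' (long and narrow) or 'patch'.

    A crude stand-in for configuration rules: the bounding-box aspect
    ratio and the mean width (area divided by the longer bounding-box
    side) decide between the two configurations.
    """
    rows, cols = np.nonzero(mask)
    h = (rows.max() - rows.min() + 1) * pixel_size
    w = (cols.max() - cols.min() + 1) * pixel_size
    length, width_box = max(h, w), min(h, w)
    mean_width = mask.sum() * pixel_size**2 / length
    if length / width_box > 3 and mean_width <= max_hedge_width_m:
        return "hedge"
    return "patch"

hedge = np.zeros((10, 60), dtype=bool)
hedge[4:7, :] = True                 # a 1.5 m wide, 30 m long strip
print(shrub_configuration(hedge))    # 'hedge'
```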

4. Discussion

4.1. Potential Applications of the Proposed Functional Urban Green Typology

Earlier research has already pointed to the need for uniform and spatially explicit datasets on urban green infrastructure within and across cities, in order to optimize the design and management of urban green spaces with regard to the provisioning of ecosystem services [96]. The functional urban green typology proposed here may act as a stepping stone towards accomplishing this goal. More specifically, the 23 functional urban green types can, in the first place, be used as a universal mapping framework to generate a detailed, spatially explicit view on urban ecosystem services through a value-transfer approach (cf. Section 1). Aside from a mapping methodology, which was the focus of the current paper, this approach requires a detailed ecosystem service scoring table indicating the relevance of the different urban green types for various ecosystem services. Whereas Table 1 already provides a qualitative starting point in this respect, more detailed quantitative ecosystem service scores would fully enable the use of the functional urban green typology in this sense. Some efforts have already been made to summarize the vast amount of scientific knowledge and empirical evidence on the link between urban green and ecosystem services. Derkzen, Van Teeffelen and Verburg [8], for instance, published a list of six ecosystem service indicator scores for seven broad urban green types and used these to evaluate ecosystem services in Rotterdam (The Netherlands). Farrugia, Hudson and McCulloch [15] specifically focused on flood control and cooling and provided three related indicator values for 22 detailed urban green types. In 2015, the Flemish institute for technological research (VITO) published a report (in Dutch) on the valuation of ecosystem services in urban areas, including qualitative and quantitative ecosystem service scores covering eight ecosystem services and 43 urban green types, based on an intensive literature review [97].
In turn, this report has been used as the basis for the Nature Value Explorer, an online tool for calculating the implications of different (urban) planning scenarios on the provisioning of ecosystem services [98], and for the “Groentool”, another online tool designed for the city of Antwerp (Belgium) for visualizing the impact of different urban green scenarios on various ecosystem services [99].
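The value-transfer idea itself is a simple lookup: each functional-type code in a classified grid is replaced by a per-type service score. The sketch below uses class codes from Table A1 but entirely hypothetical cooling scores:

```python
import numpy as np

# Value transfer: per-type ecosystem-service scores (hypothetical values,
# standing in for quantitative score tables such as those of [8] or [97])
# are mapped onto a classified grid of functional-type codes (cf. Table A1).
cooling_score = {10: 1.0,   # deciduous broadleaf tree
                 34: 0.3,   # lawn
                 70: 0.0}   # pavement
green_map = np.array([[10, 10, 70],
                      [34, 70, 70]])

es_map = np.vectorize(cooling_score.get)(green_map)
print(es_map.mean())   # mean cooling score of this block
```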
Aside from a more holistic view on ecosystem services, the proposed typology may provide a solid framework to quantify particular urban ecosystem services in a detailed way using dedicated biophysical models, e.g., UrbClim [100] for urban heat fluxes and WetSpa [101] for urban water flows. Due to its intrinsic focus on ecosystem services, our functional typology provides more relevant classes than the standard urban land cover products most commonly used as a basis for such models and most frequently generated by the urban remote sensing community, e.g., [36,37]. As a consequence, these models should be adapted to deal with the high thematic detail of the proposed typology. As different urban green characteristics might be relevant for different individual ecosystem services (e.g., leaf phenology for urban water and species information for ecological functions), the construction of a manageable typology that can directly be used to map each individual ecosystem service would be highly impractical. Therefore, we would like to stress that the proposed urban green typology should be regarded as a flexible framework, which can be extended by additional information derived either from remote sensing (e.g., leaf phenology using multiple images acquired in different seasons [102]), additional spatial analysis (e.g., landscape connectivity [103]) or field inventories (e.g., detailed information on species or management practices) to meet the needs of the specific ecosystem service under consideration.
Although ecosystem services have been the main motivation behind our work and constitute the basis for the resulting functional green typology, this typology, together with the associated mapping workflow, could be adopted to serve many more applications. Indeed, such a spatially-explicit and detailed characterization of urban green represents essential information for urban green managers, allowing them to optimize their management activities across a city. Urban ecologists and environmentalists can use these detailed thematic maps to study interactions between the occurrence of certain urban green elements and the presence, abundance and reproduction potential of animal species, as well as several indicators for environmental quality (e.g., ambient temperature, air and soil pollution). This in turn will provide further insights into specific functions and services delivered by these urban green types. Finally, detailed maps on the composition of urban green within a city can provide valuable information to urban policymakers and planners on the current state, future priorities and desirable action points regarding urban green.

4.2. Mapping Functional Urban Green Types Using Remote Sensing Data

In the past, LiDAR data has been successfully applied to improve urban and/or vegetation classification performance by simply adding it as a complementary data source to approaches that are themselves mainly based on spectral information [40,41,42,51,52,54]. The results in this study, however, suggest that LiDAR data should take up a much more central role in detailed urban classification efforts. Not only does its high spatial resolution allow for detailed image segmentation (Figure 4), the various structural, spectral and textural features derived from LiDAR data were also found to be the most important classification features overall. This is in line with a study by Chen, Du, Wu, et al. [39], which concluded that height-related LiDAR features were more important than spectral features for urban land cover mapping. Whereas basic land cover classes could be readily differentiated using only LiDAR data, the added value of spectral data, and particularly of hyperspectral data, increased significantly when considering thematically more detailed urban (green) classes (Table 3). Conceptually, this can be explained by the higher degree of complementarity between the high spatial detail of LiDAR data on the one hand and the higher spectral information content of hyperspectral compared to multispectral data on the other hand, especially given the subtle spectral differences between different urban green types [27]. Likewise, the combination of hyperspectral data and LiDAR features was found to outperform combined multispectral and LiDAR data for detailed habitat mapping in Cumbria, UK [52]. Innovative ways of combining hyperspectral and LiDAR data in OBIA approaches are emerging (e.g., the concept of 3D hyperspectral point clouds [104]), opening up exciting possibilities for further research in this respect.
Both the adoption of a hierarchical classification approach and the application of dimensionality reduction techniques (in this case MNF) to the hyperspectral dataset improved classification accuracies for detailed urban green types (Table A3). These results agree with earlier findings regarding hierarchical classification of urban green [49] and the added value of dimensionality reduction techniques for detailed vegetation classification [29,105]. Despite the use of spatially and spectrally detailed data sources and advanced analysis techniques (i.e., OBIA and Random Forest classification), uncertainties for detailed urban green types still remained high, particularly for shrub and herbaceous vegetation types (Table 3). Likewise, Mathieu, Aryal and Chong [21] reported only moderate accuracies of 63% up to 77% for detailed urban green mapping in the city of Dunedin, New Zealand, based on multispectral IKONOS imagery and OBIA techniques. Rather than merely using imagery acquired in summer (when the vegetation season is at its peak and all vegetation types appear green), as was done here, we strongly suggest further exploring the potential of multi-temporal data for mapping these urban green types. In this way, information regarding plant phenology can be integrated into the classification workflow, which is expected to benefit the distinction between evergreen and deciduous tree/shrub types and even individual species [30,106,107], between different herbaceous vegetation types [108] and between semi-natural and agricultural land [109]. Yan, Zhou, Han, et al. [102], for instance, found that phenology increased classification accuracies of broad urban green types by 10% to 13% when using an OBIA approach on Worldview-2 data.
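MNF is commonly formulated as noise-whitened PCA. The sketch below follows that textbook formulation and assumes a noise sample is available (in practice noise is estimated from the image itself, e.g. via shift differences); it is not the exact implementation used in this study:

```python
import numpy as np

def mnf(X, noise):
    """Minimum Noise Fraction as noise-whitened PCA (a common formulation)."""
    # Whiten the data with respect to the noise covariance...
    evals, evecs = np.linalg.eigh(np.cov(noise.T))
    W = evecs / np.sqrt(evals)          # columns scaled by 1/sqrt(eigenvalue)
    Xw = (X - X.mean(0)) @ W
    # ...then apply ordinary PCA to the whitened data.
    e2, v2 = np.linalg.eigh(np.cov(Xw.T))
    order = np.argsort(e2)[::-1]        # highest-SNR components first
    return Xw @ v2[:, order]

rng = np.random.default_rng(0)
signal = rng.normal(size=(500, 1)) @ rng.normal(size=(1, 8))  # rank-1 "scene"
noise = rng.normal(scale=0.1, size=(500, 8))
components = mnf(signal + noise, noise)
print(components.shape)   # (500, 8); the first band carries most of the signal
```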
Whereas classification performance on individual validation objects was not ideal but still acceptable (Table 3), accuracies dropped considerably when attempting to map spatially continuous areas (Table 4). Additional confusion was introduced particularly by edge and adjacency effects, i.e., the signal of one pixel affecting the signal of its neighboring pixels due to multiple scattering of light [110], and by the high abundance of shadow (decreasing contrast, thereby making it harder to detect subtle spectral differences [111]), as could be derived from the spatial distribution of classification uncertainty (Figure 5b). Our rule-based post-classification procedure (Table A4) did resolve some visually evident misclassification errors (Figure 5 and Figure 6), but did not lead to a significant improvement in overall accuracy (Table 4). One potential way to resolve this issue would be to collect additional training data, specifically targeting these edge and shadow regions. As the main goal of the current study was to assess the maximum potential of remote sensing data to differentiate various functional urban green types, we instead focused our efforts on collecting clear examples (i.e., pure and bright objects) of each functional type, which could explain the poor performance of our model in shadowed areas. Such additional training data can then either be combined with all other training data in one and the same model, or be used separately to train a specific model dedicated to classifying shadowed areas. Rather than merely labeling shadow as a separate class in land cover maps, as traditionally done by the urban remote sensing community [112,113], separately treating shadowed and non-shadowed areas in a hierarchical classification approach is becoming increasingly common practice in order to reveal the true land cover composition of complex urban areas [42,47].
A second approach which could reduce the negative impacts of object edges and shadow is the concept of multi-scale or hierarchical segmentation, i.e., generating multiple, nested segmentation products for the same area using different scale parameters [114]. A careful selection of the most appropriate segmentation scale for each class of interest could lead to a more realistic representation of the complex urban landscape (e.g., selecting different scales for big buildings versus small hedges) and could effectively reduce the number of edge objects. Additionally, the use of features derived from multiple segmentation scales has been shown to significantly improve land cover classification performance over single segmentation approaches [115,116].
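The nesting idea can be illustrated with a toy example: smoothing an nDSM-like surface at two different scales before thresholding yields a fine segmentation that separates two adjacent objects and a coarse one that merges them. Real OBIA workflows use dedicated algorithms (e.g., multiresolution segmentation in eCognition); this sketch only conveys the principle:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
ndsm = np.zeros((60, 60))
ndsm[10:50, 10:20] = 8.0          # a narrow "tree row" block
ndsm[10:50, 25:50] = 8.0          # an adjacent larger "patch"
ndsm += rng.normal(scale=0.3, size=ndsm.shape)   # sensor-like noise

def segment(surface, sigma):
    """Count segments after smoothing at a given scale and thresholding."""
    smooth = ndimage.gaussian_filter(surface, sigma)
    labels, n = ndimage.label(smooth > 4.0)
    return n

fine, coarse = segment(ndsm, 0.5), segment(ndsm, 4.0)
print(fine, coarse)   # the coarse scale merges the two objects
```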
The remote sensing based mapping workflow presented here certainly did not cover all types or aspects of the proposed functional urban green typology (Table 1) in equal detail. In particular, the distinction between different tree and shrub functional types based on their spatial configuration could be further improved using more in-depth contextual and spatial analysis (cf. Figure 7b,c), for instance based on specific metrics proposed by Wen, Huang, Liu, et al. [117] for semantic classification of urban trees (e.g., cohesion index, shape index, distance to road). Certain specific urban green types were not considered here due to their rarity in our study area and should be the focus of further dedicated research (e.g., detection of water plants or intensive green roofs). Finally, due to its orientation, vertical green is not expected to be readily detectable using airborne remote sensing technology, stressing the need to look into complementary data sources, including Google Street View [118] or citizen science [119].

5. Conclusions

In this paper, we proposed a functional urban green typology and an associated mapping workflow based on remote sensing data to facilitate the production of urban ecosystem service maps. The suggested typology, covering 23 functional types, may as such be used as a solid framework to produce a holistic view on urban ecosystem services through a simple value-transfer approach, but can also easily be extended with ancillary data for a more in-depth assessment of particular services. Our mapping workflow (comprising a hierarchical, object-based Random Forest classification and a subsequent rule-based post-classification correction) clearly demonstrated the potential, but also the remaining limitations, of remote sensing data for detailed urban green mapping. In general, airborne LiDAR data was found to be the most important data source for classification, but required complementary spectral data (preferably hyperspectral) when targeting urban green types at high thematic detail. The high spectral similarity between detailed urban green types and close interactions between different objects in the complex urban fabric (causing obscuring adjacency and shadow effects) were identified as the main sources of error, resulting in poor classification accuracies, especially for shrub and herbaceous vegetation classes (balanced accuracy <0.5). Nevertheless, we believe this work provides a starting point for the further development of a functional urban green mapping workflow. In our opinion, the main focus of future research should be directed towards incorporating detailed information on phenology into the classification approach through the use of multi-temporal remote sensing data.

Author Contributions

Conceptualization, B.S., M.H. and J.D.; methodology, J.D. and B.S.; software, J.D.; validation, J.D. and B.S.; formal analysis, J.D.; investigation, J.D., B.S. and M.H.; resources, J.D.; data curation, J.D.; writing—original draft preparation, J.D.; writing—review and editing, B.S. and, M.H.; visualization, J.D.; supervision, B.S. and M.H.; project administration, B.S.; funding acquisition, B.S. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The research presented in this paper is funded by the Belgian Science Policy Office in the framework of the STEREOIII program (UrbanEARS project (SR/00/307) and BelAir project (SR/01/354) for acquisition of hyperspectral data of Brussels).

Acknowledgments

The authors would like to express their gratitude to the Brussels Environmental Agency (BIM), and in particular to Mathias Engelbeen and Fabien Genard, for kindly providing Worldview-2 data and insights into the daily operation of urban green management in the city of Brussels. In addition, our thanks to Mike Alonzo for the interesting discussions on the general approach and specific methodology applied throughout this work. Finally, we would like to acknowledge Jingli Yan for his advice on object-based image processing and Joseph McFadden for our discussions regarding the functional urban green typology proposed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Sample size of training and testing datasets used for generating and testing Random Forest models in this study, compared to relative abundance of land cover classes in our twenty validation blocks.
| ID | Land Cover Class | Training (Objects) | Testing (Objects) | Relative Abundance in Validation Blocks (%) |
|---|---|---|---|---|
| 10 | Deciduous broadleaf tree | 408 | 178 | 22.99 |
| 11 | Evergreen coniferous tree | 71 | 27 | 0.57 |
| 20 | Deciduous broadleaf shrub | 88 | 34 | 1.68 |
| 21 | Evergreen coniferous shrub | 18 | 12 | 0.04 |
| 22 | Evergreen broadleaf shrub | 63 | 34 | 1.31 |
| 31 | Tall herb vegetation | 38 | 18 | 0.14 |
| 32 | Flower bed | 28 | 11 | 0.93 |
| 33 | Meadow & flower field | 63 | 16 | 2.03 |
| 34 | Lawn | 142 | 59 | 14.41 |
| 40 | Arable land | 22 | 8 | 0.00 |
| 41 | Vegetable garden | 25 | 13 | 0.00 |
| 50 | Ext. green roof | 13 | 5 | 0.89 |
| 60 | Roof | 251 | 106 | 12.24 |
| 70 | Pavement | 240 | 105 | 36.51 |
| 80 | Soil | 25 | 18 | 2.00 |
| 90 | Water | 12 | 3 | 3.03 |
| 100 | Cars | 163 | 62 | 0.00 |
Table A2. Different combinations of object features used for training a random forest classification model. Each combination was tested in a hierarchical and non-hierarchical classification approach. For each feature (except for geometry features), both the object mean and standard deviation were included in the model.
| ID | Included Features | Number of Features |
|---|---|---|
| | *Hyperspectral* | |
| 1 | NDVI (APEX) | 2 |
| 2 | APEX indices (NDVI, NDWI-G, NDWI-W, NDWI-M, GrassIdx, RedGreen ratio, BlueGreen ratio, brightness) | 16 |
| 3 | APEX indices + LiDAR features (nDSM1, nDSM2, intensity, treeIndex) | 24 |
| 4 | APEX indices + LiDAR features + texture features (texture of nDSM1, nDSM2, intensity) | 72 |
| 5 | APEX indices + LiDAR features + texture features + geometry (area, perimeter, compact_circle) | 75 |
| 6 | APEX bands (218 original spectral bands) | 416 |
| 7 | APEX bands + LiDAR features | 424 |
| 8 | APEX bands + LiDAR features + texture features | 472 |
| 9 | APEX bands + LiDAR features + texture features + geometry | 475 |
| 10 | APEX MNF (30 MNF transformed APEX bands) | 60 |
| 11 | APEX MNF + LiDAR features | 68 |
| 12 | APEX MNF + LiDAR features + texture features | 116 |
| 13 | APEX MNF + LiDAR features + texture features + geometry | 119 |
| | *Multispectral* | |
| 14 | NDVI (Worldview-2) | 2 |
| 15 | NDVI + WV bands (all 8 original Worldview-2 bands) | 18 |
| 16 | NDVI + WV bands + LiDAR features | 26 |
| 17 | NDVI + WV bands + LiDAR features + texture features | 74 |
| 18 | NDVI + WV bands + LiDAR features + texture features + geometry | 77 |
| | *LiDAR only* | |
| 19 | LiDAR features | 8 |
| 20 | LiDAR features + texture features | 56 |
| 21 | LiDAR features + texture features + geometry | 59 |
Table A3. Total accuracies attained through object-based classification of individual test objects using different sets of object features as input to the classification algorithm. More information on the specific features used in each set is included in Table A2. Results are shown both for the hierarchical modelling approach and non-hierarchical model. In the former case, 7 individual models were created to distinguish (1) vegetation from non-vegetation, (2) woody (tree, shrub) from non-woody vegetation, (3) trees and shrubs, (4) detailed woody vegetation classes, (5) lawn, agriculture, extensive green roofs and other herbaceous vegetation, (6) detailed non-woody vegetation classes and (7) roof, pavement, soil and water. Two non-hierarchical models were produced, one for basic vegetation classes and one for most detailed vegetation classes. Highest accuracies are indicated in bold.
| Feature Set ID | (1) | (2) | (3) | (4) | (5) | (6) | (7) | Basic | Detailed |
|---|---|---|---|---|---|---|---|---|---|
| *Hyperspectral + LiDAR* | | | | | | | | | |
| 1 | 0.91 | 0.69 | 0.51 | 0.42 | 0.44 | 0.34 | 0.47 | 0.38 | 0.34 |
| 2 | 0.94 | 0.79 | 0.61 | 0.55 | 0.64 | 0.55 | 0.67 | 0.57 | 0.54 |
| 3 | 0.96 | 0.91 | 0.81 | 0.69 | 0.69 | 0.66 | 0.91 | 0.83 | 0.77 |
| 4 | 0.97 | **0.93** | **0.86** | 0.70 | 0.69 | 0.65 | 0.92 | 0.85 | 0.78 |
| 5 | 0.96 | **0.93** | 0.85 | 0.69 | 0.69 | 0.66 | 0.92 | 0.85 | 0.78 |
| 6 | 0.93 | 0.78 | 0.60 | 0.55 | 0.65 | 0.61 | 0.65 | 0.58 | 0.54 |
| 7 | 0.95 | 0.90 | 0.82 | 0.67 | 0.71 | 0.66 | 0.92 | 0.83 | 0.77 |
| 8 | 0.95 | 0.92 | 0.85 | 0.69 | 0.71 | 0.66 | **0.94** | 0.85 | 0.76 |
| 9 | 0.95 | 0.92 | **0.86** | 0.69 | 0.72 | 0.67 | 0.93 | 0.85 | 0.77 |
| 10 | 0.94 | 0.84 | 0.66 | 0.57 | 0.71 | 0.69 | 0.76 | 0.65 | 0.62 |
| 11 | 0.96 | 0.92 | **0.86** | **0.74** | 0.73 | **0.71** | 0.91 | 0.85 | **0.80** |
| 12 | **0.98** | **0.93** | 0.85 | 0.72 | **0.75** | **0.71** | 0.92 | **0.86** | 0.79 |
| 13 | **0.98** | **0.93** | **0.86** | 0.72 | 0.73 | 0.70 | 0.92 | 0.85 | 0.78 |
| *Multispectral + LiDAR* | | | | | | | | | |
| 14 | 0.90 | 0.65 | 0.54 | 0.44 | 0.47 | 0.38 | 0.39 | 0.34 | 0.30 |
| 15 | 0.92 | 0.75 | 0.67 | 0.52 | 0.57 | 0.51 | 0.54 | 0.53 | 0.50 |
| 16 | 0.95 | 0.90 | 0.83 | 0.63 | 0.72 | 0.64 | 0.88 | 0.82 | 0.73 |
| 17 | 0.96 | 0.92 | 0.85 | 0.67 | 0.69 | 0.65 | 0.89 | 0.83 | 0.76 |
| 18 | 0.96 | 0.92 | **0.86** | 0.69 | 0.70 | 0.63 | 0.90 | 0.83 | 0.76 |
| *LiDAR only* | | | | | | | | | |
| 19 | 0.92 | 0.88 | 0.80 | 0.62 | 0.66 | 0.64 | 0.86 | 0.78 | 0.70 |
| 20 | 0.94 | 0.90 | 0.82 | 0.64 | 0.66 | 0.62 | 0.87 | 0.82 | 0.72 |
| 21 | 0.94 | 0.89 | 0.83 | 0.65 | 0.67 | 0.61 | 0.86 | 0.82 | 0.73 |
Table A4. Overview of rule-based post-classification procedure used to correct for visually obvious classification errors occurring after initial object-based classification of our twenty validation blocks. Procedure developed and applied in eCognition software.
Shadowed areas wrongly classified as water (rules applied only to objects with class probability below 0.7):
- If NDWI_X > −0.3 → water
- Else if intensity > 0.3 AND NDVI > 0.6 → herbaceous vegetation
- Else → pavement
- If water AND area < 200 pixels AND NDVI > 0.6 → lawn
- If water AND area < 200 pixels AND NDVI ≤ 0.6 → pavement

Water body wrongly classified as vegetation or pavement (rules applied only to objects with class probability below 0.7):
- If NDWI_X > −0.3 AND relative border to water > 0 → water

Shaded or narrow pavement misclassified as vegetation (rules applied only to objects with class probability below 0.7):
- If herbaceous vegetation AND intensity < 0.4 AND NDVI < 0.85 → pavement
- If herbaceous vegetation AND NDVI < 0.2 → pavement
- If lawn AND NDVI < 0.2 → pavement
- If lawn AND intensity < 0.6 → pavement
- If cropland AND intensity < 0.23 → pavement
- If cropland AND intensity < 0.35 AND asymmetry > 0.84 → pavement

Small patches classified as cropland:
- If cropland AND area < 600 pixels AND intensity ≥ 0.4 → herbaceous vegetation
- If cropland AND area < 600 pixels AND intensity < 0.4 → soil

Cars classified as shrub:
- If shrub enclosed by car → car
- If shrub with relative border to car > 0.5 → car
- If shrub with relative border to car > 0.24 AND NDVI < 0.35 → car
- If shrub with relative border to car > 0.24 AND relative height difference < 0.13 → car

Shrub classified as car:
- If car AND area < 52 pixels AND NDVI > 0.3 → shrub
- If car AND asymmetry > 0.8 AND NDVI > 0.2 → shrub
- If car AND compactness > 4 AND NDVI > 0.2 → shrub

Roof edge misclassified as tree:
- If tree AND relative border to roof > 0.3 AND area < 500 pixels → roof
- If tree AND relative border to roof > 0.3 AND asymmetry > 0.95 → roof
- If tree fully enclosed by roof → roof

Small portions of trees or shrubs classified as roof:
- If roof AND area < 200 pixels AND relative border to tree > 0.4 → tree
- If roof AND area < 200 pixels AND relative border to shrub > 0.4 → shrub

Edges of trees misclassified as shrub (due to low height):
- If shrub AND relative border to tree > 0.31 AND area < 300 pixels → tree

Small parts of evergreen coniferous trees (ECT) misclassified as deciduous broadleaf trees (DBT) and vice versa:
- If DBT AND relative border to ECT > 0.3 AND area < 400 pixels → ECT
- If ECT AND relative border to DBT > 0.3 AND area < 400 pixels → DBT
Table A5. Confusion matrix obtained for classifying all validation blocks according to the basic vegetation classes and using the best performing Random Forest model, i.e., a hierarchical model featuring hyperspectral and LiDAR data. Classification results are presented in the rows, reference classes in the columns. Red numbers indicate severe confusion (more than 5 % of the reference pixels of a certain class being classified as another class). Numbers marked in grey represent those confusions actively dealt with in the post-classification correction procedure.
TreeShrubHerbaceousLawnCrop-landExt. green roofRoofPavementSoilWaterTotal
Tree678,68516,7313242620040826,503700054270732,883
Shrub26,77790,5768975985200243916,46421792229159,491
Herbaceous843710,68970,74827,3030082118,37937431484141,604
Lawn15,599691912,776401,16400550638,84414,082421495,311
Cropland40662985851818001326,34397870234,803
Ext. green roof20216028,0529117800037,269
Roof9911084341453028342,38911,58718080359,374
Pavement17,3119259487812,4910047111,027,23110,36336771,089,921
Soil145874168030650024317,11430,026053,327
Water6581716985900100513019188,35795,535
Total753,984136,46899,071460,641028,488391,8421,168,17263,91296,9403,199,518
Table A6. Confusion matrix obtained after applying a rule-based post-classification correction procedure on the results presented in Table A5. Red numbers indicate severe confusion (more than 5% of the reference pixels of a certain class being classified as another class). Numbers marked in grey represent those confusions actively dealt with in the post-classification correction procedure.
TreeShrubHerbaceousLawnCroplandExt. Green RoofRoofPavementSoilWaterTotal
Tree684,65819,6067223763040852826521755210721,925
Shrub19,52387,1428569825400259716,96820742089147,216
Herbaceous6811984569,67827,1620075312,3983333847130,827
Lawn11,79548809437387,31500527015,8108104421443,032
Cropland51046271420013601668307437
Ext. green roof20216028,0529117800037,269
Roof22881311441879028363,52913,98618730384,938
Pavement25,53312,839914728,5160050251,071,47316,78335991,172,915
Soil270278293435940024322,05230,30739961,013
Water16217511000132868089,37592,946
Total753,984136,46899,071460,641028,488391,8421,168,17263,91296,9403,199,518
Table A7. Confusion matrix obtained for classifying all validation blocks according to the most detailed vegetation classes and using the best performing Random Forest model, i.e., a hierarchical model featuring hyperspectral and LiDAR data. Red numbers indicate severe confusion (more than 5% of the reference pixels of a certain class being misclassified).
[Confusion matrix not reproduced here. Rows (classification results) and columns (reference classes): DBT, ECT, DBS, ECS, EBS, Tall herb, Flower bed, Meadow, Lawn, Arable land, Vegetable garden, Ext. green roof, Roof, Pavement, Soil, Water; the final row and column give class totals over 3,199,518 validation pixels.]
DBT = Deciduous broadleaf tree; ECT = Evergreen coniferous tree; DBS = Deciduous broadleaf shrub; ECS = Evergreen coniferous shrub; EBS = Evergreen broadleaf shrub.
Table A8. Confusion matrix obtained after applying a rule-based post-classification correction procedure on the results presented in Table A7. Red numbers indicate severe confusion (more than 5% of the reference pixels of a certain class being classified as another class).
[Confusion matrix not reproduced here. Same row and column layout as Table A7 (see abbreviations below), plus class totals over 3,199,518 validation pixels.]
DBT = Deciduous broadleaf tree; ECT = Evergreen coniferous tree; DBS = Deciduous broadleaf shrub; ECS = Evergreen coniferous shrub; EBS = Evergreen broadleaf shrub.
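The "severe confusion" rule used in Tables A5–A8 (more than 5% of the reference pixels of a class being assigned to another class) is straightforward to apply programmatically. The sketch below uses an illustrative toy confusion matrix rather than the study's actual counts; the class names and cell values are assumptions for demonstration only.

```python
import numpy as np

# Toy confusion matrix mirroring the layout of Tables A5-A8:
# rows = classification results, columns = reference classes.
# Values are illustrative, not taken from the study.
classes = ["Tree", "Shrub", "Lawn"]
cm = np.array([
    [900,  60,  10],
    [ 80, 120,   5],
    [ 20,  20, 185],
])

ref_totals = cm.sum(axis=0)   # reference pixels per class (column sums)
share = cm / ref_totals       # fraction of each reference class per cell

# Severe confusion: >5% of a reference class assigned to another class
severe = [(classes[i], classes[j], share[i, j])
          for i in range(len(classes)) for j in range(len(classes))
          if i != j and share[i, j] > 0.05]

for pred, ref, frac in severe:
    print(f"{frac:.1%} of reference '{ref}' classified as '{pred}'")

# Producer's accuracy per class: diagonal count / reference total
producers = np.diag(cm) / ref_totals
```

Because the tables place classification results in the rows and reference classes in the columns, the 5% threshold is evaluated against column sums, i.e., from the producer's perspective.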

Figure 1. The difference between hyperspectral signals (plotted as individual lines) and multispectral signals. For the latter, only the spectral band limits have been plotted as grey rectangles. The multispectral sensor (in this case WorldView-2) records only a single reflectance value per band. The examples plotted here illustrate that spectral data can be used to identify objects. The different parts of the spectrum are indicated on top of the graph, where VIS = visible (0.4–0.7 µm), NIR = near-infrared (0.7–1.25 µm) and SWIR = short-wave infrared (1.25–2.5 µm).
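The band averaging described in the caption can be illustrated numerically: a hyperspectral reflectance curve collapses to one value per multispectral band by averaging reflectance within each band's limits. The band limits and the vegetation-like spectrum below are illustrative assumptions, not the exact WorldView-2 response functions.

```python
import numpy as np

def simulate_multispectral(wavelengths, reflectance, band_limits):
    """Collapse a hyperspectral curve to one mean reflectance per band."""
    values = []
    for lo, hi in band_limits:
        in_band = (wavelengths >= lo) & (wavelengths <= hi)
        values.append(reflectance[in_band].mean())
    return np.array(values)

# Synthetic vegetation-like spectrum: low in VIS, red edge near 0.72 µm
wl = np.arange(0.40, 1.00, 0.01)                     # µm, 10 nm sampling
refl = 0.05 + 0.5 / (1.0 + np.exp(-(wl - 0.72) * 40.0))

# Approximate blue/green/red/NIR band limits in µm (assumed values)
bands = [(0.45, 0.51), (0.51, 0.58), (0.63, 0.69), (0.77, 0.90)]
ms = simulate_multispectral(wl, refl, bands)         # one value per band
```

The simulated NIR value exceeds the red value, reproducing the red-edge jump that makes vegetation separable with both sensor types.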
Figure 2. Location and extent of remote sensing datasets used in this study relative to the full extent of the Brussels Capital Region, together with the location of training and validation blocks. The size of the validation blocks has been exaggerated for visual purposes; their real size is 100 × 100 m. WorldView-2 data was available for the entire Brussels Capital Region.
Figure 3. Overview of the classification workflow of functional urban green types proposed in this study, including (1) spectral and structural feature calculation, (2) image segmentation, (3) training and validation data generation, (4) selection of best model to classify plant type, (5) application of best model, (6) post-classification correction and (7) rule-based classification to discern spatial configuration.
Figure 4. Example of input raster datasets used for image segmentation in this study and the corresponding output for one of the twenty validation blocks, with (a) nDSM1, (b) nDSM2, (c) intensity and (d) segmentation result depicted on intensity raster. Whereas nDSM1 shows the main height differences between buildings, trees and ground, nDSM2 highlights small height variations of low objects, e.g., individual hedges and cars.
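A normalized DSM is conventionally obtained by subtracting the terrain model (DTM) from the surface model (DSM). The text here does not state how nDSM2 was derived; clipping the height range is one plausible way to emphasize low objects such as hedges and cars, and the sketch below uses that assumption.

```python
import numpy as np

def normalized_dsm(dsm, dtm, clip_max=None):
    """Height above ground (nDSM); optionally clip tall objects so that
    small height variations of low objects remain visible."""
    ndsm = np.maximum(dsm - dtm, 0.0)    # negative noise set to ground level
    if clip_max is not None:
        ndsm = np.minimum(ndsm, clip_max)
    return ndsm

dsm = np.array([[12.0, 3.5], [1.2, 0.9]])   # building, shrub, car, kerb (m)
dtm = np.full((2, 2), 0.5)                  # flat terrain at 0.5 m

ndsm1 = normalized_dsm(dsm, dtm)                 # full range (cf. panel a)
ndsm2 = normalized_dsm(dsm, dtm, clip_max=3.0)   # low objects (cf. panel b)
```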
Figure 5. Example of classification results obtained for basic classes and for one out of twenty validation blocks (100 × 100 m), including (a) first classification result, (b) first classification result where areas featuring high classification uncertainty (class membership probability < 0.7) are masked out (white), (c) result after post-classification correction and (d) manually digitized reference data.
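The masking step in panel (b) only needs the per-class membership probabilities that a Random Forest already provides. A minimal sketch with scikit-learn on synthetic data (the 0.7 threshold follows the caption; the features and labels are purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic object features and labels standing in for image segments
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

proba = clf.predict_proba(X)              # class membership probabilities
labels = clf.classes_[proba.argmax(axis=1)]
confident = proba.max(axis=1) >= 0.7      # keep only confident predictions

# Objects failing the threshold are masked out (white in panel b) or
# handed to the post-classification correction step instead.
```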
Figure 6. Example of detailed classification results obtained for one out of twenty validation blocks (100 × 100 m), including (a) initial classification result based on Random Forest model, (b) result after post-classification correction and (c) manually digitized reference data.
Figure 7. Detailed classification of trees and shrubs based on spatial configuration for (a) one entire validation block (cf. Figure 6), (b) one particular tree row, which was only partly labeled as tree row and (c) an area in which a tree row is in direct contact with a tree patch, causing the tree row not to be detected at all.
Table 1. Functional urban green typology proposed in this study. For each type, an indication of its relevance for several provisioning, regulating, cultural and supporting ecosystem services is included (X = important contribution; (X) = low contribution; blank = (almost) no contribution). Functional types not covered in the remote sensing-based mapping part of this study have been greyed out.
Table columns: Functional Urban Green Type; Definition; relevance marks for Provisioning (Food, Biomass), Regulating (Air Purification, Micro-Climate, C Sequestration, Water: Quantity, Water: Quality, Noise and Visual), Cultural (Recreation, Visual Attractiveness) and Supporting (Internal Biodiversity, Supporting Biodiversity). The relevance marks follow each definition in this column order.

TREES *1 *2
Forest: Area dominated by densely planted or naturally grown trees. Canopy is closed, except for forests in an early succession stage. The ecological function is more important than the production function. (ES: XXXXXXXXXXX)
Tree plantation: Trees planted at regular and nearly constant intervals from one another, usually with herbaceous or grassy undergrowth. Canopy is not necessarily closed. Trees are around the same age and size. Main function is production. (ES: XXXXXXXX X)
Wood verge: A dense mixture of different species of trees and shrubs. Shape is linear; used as a fence next to e.g., roads, watercourses and private property. (ES: XXXXXX XX)
Tree patch: A group of trees together forming a closed canopy. (ES: XXXXX X)
Tree row: Trees planted at regular and nearly constant intervals (3–15 m) in one or multiple rows. Trees are around the same age. Maximum width is 30 m. (ES: XXXXXX X X)
Espalier: Trees (or large shrubs) intensively pruned and guided in a way that all branches occur in one vertical plane. May also occur next to a building facade. (ES: (X) (X)X(X)XXX X X)
Connected solitary tree: A single tree positioned close to other trees (distance smaller than 15 m). (ES: XXXXX X X)
Isolated solitary tree: A single tree positioned in a relatively wide, open space (distance to nearest tree larger than 15 m). (ES: XXXXX X)

SHRUBS *1
Scrub patch: Large surface area covered with shrubs (width >15 m). (ES: XXXXXX (X)XX)
Hedge: A row of shrubs or small trees, planted within 1 m from each other and regularly (once or multiple times per year) sheared. Maximum width is 2 m. (ES: (X)(X)(X)(X)(X)X (X) X)
Group of shrubs: A group of shrubs of less than 15 m wide or a solitary individual, mainly planted for ornamental purposes. (ES: (X) (X)(X)(X)(X)(X) X)

HERBACEOUS PLANTS
Lawn: Homogeneous patch dominated by grass species and regularly mown. (ES: X (X)(X) X)
Pasture: Diverse patch dominated by grass species which is grazed by animals. (ES: (X)(X) XXXX)
Meadow: Diverse patch dominated by grass species which is infrequently mown. (ES: X (X)(X) XX X)
Flower bed: Patch planted with herbaceous non-grass species, mainly for ornamental purposes, also including plants planted in pots. (ES: XX(X))
Tall herb vegetation: Dense herbaceous vegetation of more than 1 m high. (ES: X (X)X XX)
Flower field: Patch dominated by herbaceous non-grass species in a natural situation. (ES: (X) (X)(X) XXX)
Water plants: Plants fully living in water, either submerged or near the water surface. (ES: (X) X(X)X)
Arable land: Large land surface used for crop production. (ES: X (X) (X) X)
Vegetable garden: Small-scale farming. Typically, different crops are combined on a small piece of land. (ES: X (X) (X)(X))
Climbers and plant walls: Climbing or non-climbing plants (partially) covering a wall, with or without additional infrastructure to support the plants. This type also includes plants that spontaneously grow directly on (old) walls. (ES: XX XXX X (X))
Extensive green roof: Green roof with limited substrate depth (max. 20 cm) dominated by Sedum (leaf succulent) species and possibly other spontaneous herbaceous species. (ES: (X)X XX X(X)X)
Intensive green roof: Green roof with substrate depth >20 cm, containing a mixture of grass, herbaceous plants, shrubs and/or trees. (ES: (X) (X)X(X)XX XX(X)X)

*1 Each of the urban green functional types within this category should be further divided according to phenology (evergreen/deciduous) and leaf type (broadleaf/coniferous). *2 Each of the urban green functional types within this category should be further divided according to size (height).
Table 2. Overview of rule-based classification procedure used to distinguish different functional types of trees and shrubs based on their spatial configuration. Procedure developed and applied in eCognition software.
Distinction between shrubs and hedges:
  If shrub AND asymmetry ≥ 0.8 AND width ≤ 2.5 m → hedge
  If shrub AND compactness > 5 AND width (main line) < 2 m → hedge
Distinction between group of shrubs and scrub patch:
  If shrub AND width ≥ 15 m → scrub patch
  Else → group of shrubs
Distinction between tree patch, tree row, solitary tree connected and solitary tree isolated:
  If tree AND asymmetry ≥ 0.8 AND width < 30 m → tree row
  If tree AND area < 15 m² → solitary tree
  If tree AND area < 150 m² AND asymmetry < 0.3 → solitary tree
  If solitary tree AND distance to other trees > 15 m → solitary tree isolated
  Else → tree patch
Detection of wood verges:
  Merge all tree and shrub classes together
  If combined object has asymmetry ≥ 0.8 AND area > 150 m² AND relative contribution of both tree and shrub < 0.7 → wood verge
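The authors implemented these rules in eCognition; re-expressed in Python they amount to a simple decision cascade per segmented object. The feature names (`asymmetry`, `compactness`, `width`, `area`, `distance_to_trees`) are hypothetical stand-ins for the corresponding eCognition object features, and the merge-based wood-verge rule is omitted for brevity.

```python
def classify_woody_object(obj):
    """Assign a functional type to one tree/shrub object using the
    geometric rules of Table 2 (units: m and m²)."""
    if obj["class_"] == "shrub":
        if obj["asymmetry"] >= 0.8 and obj["width"] <= 2.5:
            return "hedge"
        if obj["compactness"] > 5 and obj["width"] < 2:
            return "hedge"
        return "scrub patch" if obj["width"] >= 15 else "group of shrubs"
    if obj["class_"] == "tree":
        if obj["asymmetry"] >= 0.8 and obj["width"] < 30:
            return "tree row"
        solitary = obj["area"] < 15 or (obj["area"] < 150 and obj["asymmetry"] < 0.3)
        if solitary:
            # solitary trees are split by their distance to the nearest tree
            return ("solitary tree isolated" if obj["distance_to_trees"] > 15
                    else "solitary tree connected")
        return "tree patch"
    return obj["class_"]

# A narrow, elongated shrub object becomes a hedge
hedge = classify_woody_object({"class_": "shrub", "asymmetry": 0.9,
                               "compactness": 2.0, "width": 2.0,
                               "area": 40.0, "distance_to_trees": 5.0})
```

Because the rules are evaluated top-down, the first matching condition wins, mirroring the order in which rule sets fire in an eCognition rule tree.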
Table 3. Overview of the best overall and class-wise (balanced) accuracies attained for the object-based classification of individual test objects using three different data sources (APEX = hyperspectral + LiDAR; WV2 = multispectral + LiDAR; LiDAR = LiDAR only) in a hierarchical (H) and non-hierarchical (NH) classification approach. Results are shown for the classification of (a) basic (aggregated) urban green classes and (b) most detailed classes. More information on the specific object features used in each model is included in Appendix A (Table A3).
(a) BASIC CLASSES          H-APEX  H-WV2  H-LiDAR  NH-APEX  NH-WV2  NH-LiDAR
Overall Accuracy             0.88    0.86    0.85     0.89     0.88     0.85
Class-wise Accuracies
10   Tree                    0.99    0.98    0.98     0.99     0.99     0.98
20   Shrub                   0.90    0.90    0.92     0.93     0.93     0.93
30   Herbaceous              0.80    0.80    0.77     0.78     0.76     0.76
34   Lawn                    0.92    0.92    0.90     0.93     0.93     0.90
40   Agriculture             0.76    0.71    0.59     0.74     0.66     0.59
50   Ext. green roof         0.80    0.70    0.70     0.80     0.70     0.60
60   Roof                    0.98    0.98    0.96     0.98     0.98     0.96
70   Pavement                0.96    0.95    0.92     0.96     0.96     0.92
80   Soil                    0.72    0.53    0.50     0.58     0.53     0.50
90   Water                   1.00    0.83    0.83     0.83     0.67     0.67
100  Cars                    0.94    0.94    0.94     0.93     0.94     0.94
(b) DETAILED CLASSES           H-APEX  H-WV2  H-LiDAR  NH-APEX  NH-WV2  NH-LiDAR
Overall Accuracy                 0.81    0.76    0.74     0.81     0.77     0.75
Class-wise Accuracies
10   Deciduous broadleaf tree    0.97    0.95    0.96     0.97     0.96     0.96
11   Evergreen coniferous tree   0.87    0.61    0.55     0.81     0.61     0.55
20   Deciduous broadleaf shrub   0.81    0.81    0.84     0.82     0.83     0.84
21   Evergreen coniferous shrub  0.61    0.53    0.51     0.59     0.57     0.52
22   Evergreen broadleaf shrub   0.73    0.71    0.71     0.78     0.69     0.72
31   Tall herb vegetation        0.80    0.78    0.77     0.74     0.80     0.80
32   Flower bed                  0.68    0.63    0.63     0.54     0.50     0.54
33   Meadow and flower field     0.81    0.77    0.74     0.78     0.74     0.77
34   Lawn                        0.92    0.92    0.90     0.93     0.93     0.92
40   Arable land                 0.94    0.75    0.56     0.87     0.69     0.56
41   Vegetable garden            0.65    0.69    0.61     0.62     0.65     0.57
50   Ext. green roof             0.80    0.70    0.70     0.80     0.70     0.70
60   Roof                        0.98    0.98    0.96     0.99     0.98     0.97
70   Pavement                    0.96    0.95    0.92     0.96     0.96     0.92
80   Soil                        0.72    0.53    0.50     0.64     0.53     0.50
90   Water                       1.00    0.83    0.83     0.83     0.67     0.67
100  Cars                        0.94    0.94    0.94     0.95     0.95     0.96
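Table 3 reports class-wise balanced accuracies. A common definition, assumed here, averages sensitivity and specificity for each class in a one-vs-rest sense, which prevents frequent classes (e.g., pavement) from dominating the score:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, cls):
    """Mean of sensitivity and specificity for class `cls` (one-vs-rest)."""
    t = np.asarray(y_true) == cls        # reference membership
    p = np.asarray(y_pred) == cls        # predicted membership
    sensitivity = (t & p).sum() / t.sum()
    specificity = (~t & ~p).sum() / (~t).sum()
    return 0.5 * (sensitivity + specificity)

# Toy labels for illustration (not the paper's data)
y_true = ["tree", "tree", "shrub", "lawn", "tree", "shrub"]
y_pred = ["tree", "shrub", "shrub", "lawn", "tree", "shrub"]
ba_tree = balanced_accuracy(y_true, y_pred, "tree")   # 2/3 sens, 1.0 spec
```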
Table 4. Overall and class-wise (balanced) classification accuracies achieved after applying the best performing Random Forest model (cf. Table 3) on the twenty validation blocks ("initial classification"), after discarding zones with a class membership probability below 0.7 and after applying a rule-based post-classification correction algorithm (cf. Table A4). n denotes the number of image pixels available per class. Due to the absence of agricultural land in our validation dataset, both arable land and vegetable gardens have been omitted here.
(a) BASIC CLASSES     Initial Classification   Retaining Only Class Probability > 0.7   Post-Classification Correction
Overall Accuracy        0.86                     0.94                                     0.87
Kappa                   0.82                     0.92                                     0.84

Per class (columns: initial Acc, initial n (×10³), Acc after retaining only class probability > 0.7, relative reduction in n, Acc after post-classification correction)
Tree              0.90   754.0    0.94   0.07   0.93
Shrub             0.55   136.5    0.70   0.33   0.57
Herbaceous        0.48   99.1     0.82   0.34   0.52
Lawn              0.78   460.6    0.87   0.19   0.85
Ext. green roof   0.75   28.5     0.97   0.06   0.75
Roof              0.95   391.8    0.98   0.11   0.94
Pavement          0.91   1168.2   0.94   0.20   0.86
Soil              0.55   63.9     0.75   0.46   0.49
Water             0.92   96.9     0.99   0.09   0.96
Total                    3199.5          0.17
(b) DETAILED CLASSES   Initial Classification   Retaining Only Class Probabilities > 0.7   Post-Classification Correction
Overall Accuracy         0.84                     0.94                                       0.86
Kappa                    0.79                     0.92                                       0.81

Per class (columns as in (a); DBT/ECT = deciduous broadleaf / evergreen coniferous tree; DBS/ECS/EBS = deciduous broadleaf / evergreen coniferous / evergreen broadleaf shrub, cf. Table 3)
DBT                     0.89   735.7    0.94   0.14   0.93
ECT                     0.51   18.2     0.94   0.43   0.50
DBS                     0.30   63.5     0.71   0.68   0.31
ECS                     0.72   8.1      0.90   0.75   0.76
EBS                     0.41   64.9     0.89   0.72   0.45
Tall herb               0.31   20.6     0.60   0.52   0.30
Flower bed              0.27   29.8     0.73   0.53   0.32
Meadow & flower field   0.26   48.6     0.51   0.41   0.28
Lawn                    0.78   460.6    0.87   0.21   0.85
Ext. green roof         0.75   28.5     0.97   0.06   0.75
Roof                    0.95   391.8    0.98   0.12   0.94
Pavement                0.91   1168.2   0.94   0.20   0.86
Soil                    0.55   63.9     0.75   0.47   0.49
Water                   0.92   96.9     0.99   0.09   0.96
Total                          3199.5          0.21
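Table 4 also lists Cohen's kappa, which corrects the overall accuracy for chance agreement. A self-contained sketch of the computation from two label sequences (toy labels, not the paper's data):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement
    and p_e the agreement expected by chance from the marginals."""
    labels = sorted(set(y_true) | set(y_pred))
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))      # confusion matrix
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    n = cm.sum()
    p_o = np.trace(cm) / n                          # observed agreement
    p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

kappa = cohens_kappa(["roof", "roof", "lawn", "lawn"],
                     ["roof", "roof", "lawn", "roof"])
```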

Share and Cite

MDPI and ACS Style

Degerickx, J.; Hermy, M.; Somers, B. Mapping Functional Urban Green Types Using High Resolution Remote Sensing Data. Sustainability 2020, 12, 2144. https://0-doi-org.brum.beds.ac.uk/10.3390/su12052144
