Article

Fusing High-Spatial-Resolution Remotely Sensed Imagery and OpenStreetMap Data for Land Cover Classification Over Urban Areas

Nianxue Luo, Taili Wan, Huaixu Hao and Qikai Lu
1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430072, China
2 Electronic Information School, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Submission received: 8 November 2018 / Revised: 25 December 2018 / Accepted: 28 December 2018 / Published: 7 January 2019
(This article belongs to the Special Issue Citizen Science and Earth Observation II)

Abstract: Land cover classification of urban areas is critical for understanding the urban environment. High-resolution remotely sensed imagery provides abundant, detailed spatial information for urban classification. Meanwhile, OpenStreetMap (OSM) data, a typical form of crowd-sourced geographic information, have emerged as a data source for obtaining urban information. In this context, a land cover classification method that fuses high-resolution remotely sensed imagery and OSM data is proposed. Training samples were generated by integrating the OSM data and multiple information indexes. The OSM data, which contain class attributes and location information of urban objects, served as the labels of the initial training samples, while multiple information indexes that reflect the spectral and spatial characteristics of different classes were utilized to improve the training set. Morphological attribute profiles were used because the structural and contextual information of images is effective in distinguishing classes with similar spectral characteristics. Moreover, a road superimposition strategy that considers road hierarchy was developed because OSM provides road information with high completeness in urban areas. Experiments were conducted on data captured over Wuhan, China, and three state-of-the-art approaches were adopted for comparison. The results show that the proposed approach obtains satisfactory results and outperforms the comparative approaches.

1. Introduction

The rapid process of urbanization has dramatically changed the distribution of urban land cover in recent years. Land cover information of urban areas is important because it helps people understand how their living environment is changing. Urban land cover information can also help government agencies and other policy makers make decisions on urban planning and management [1]. However, owing to its high cost and low efficiency, manual collection of land cover information is impractical in most cases. Therefore, the geographic information provided by remote sensing technology or social sensors must be utilized for land cover classification over urban areas [2].
High-resolution remotely sensed imagery provides detailed spatial and structural information and thus offers new avenues for precise land cover classification over urban areas [3,4]. However, high spatial resolution does not guarantee accurate computer interpretation. In high-resolution remotely sensed imagery, urban areas show high intra-class and low inter-class variability. The increase in intra-class variation and the decrease in inter-class variation reduce the separability of classes in the spectral domain, which makes it difficult to distinguish different classes using the spectral characteristics of the image alone [5,6]. Hence, considerable research has exploited the spatial information of high-resolution remotely sensed imagery, treating textural and structural features as important information sources that complement spectral properties for accurate classification [7,8,9]. Classification performance relies on the quality and quantity of training samples [10]. Conventional ways to collect training samples, such as field surveys and visual inspection, are time consuming, laborious, costly, and prone to human error [11]. In this context, active learning and semi-supervised learning methods have been adopted to reduce the manual work of sample collection [12]. By selecting the most informative data, active learning minimizes the number of samples that must be labeled by experts [13,14]. Semi-supervised learning uses information exploited from unlabeled data to improve classification performance [15,16]. Meanwhile, the wide acceptance of open-source geographical data has drawn increasing attention to OpenStreetMap (OSM) for urban environment understanding [17]. OSM is a crowd-sourced project that aims to create a set of map data that is free to use, edit, upload, and download [18]. Volunteers can delineate an object based on satellite image basemaps and label it with predefined tags (e.g., name, land cover/land use, and address) or custom tags (e.g., the opening hours of a hospital or the website of a university). Name and land cover/land use are the most commonly used tags. Thus, OSM data contain a large amount of land cover information for assisting urban land cover mapping [19]. Since 2007, the number of registered users and the track points of OSM have increased considerably, and OSM data are comparable to proprietary data in terms of accuracy and coverage in certain countries and regions [20].
In recent years, the integration of high-resolution remotely sensed imagery and OSM for land cover classification has drawn increasing attention. OSM contains geographic data with class attributes and location information, which benefit the collection and labeling of training samples for remote sensing classification. In [21], a land use/land cover mapping approach using time-series imagery and training information extracted from OSM data was introduced. Three relatively noise-tolerant algorithms (naïve Bayes, the C4.5 decision tree, and random forest (RF)) were used to reduce the influence of OSM noise on the classification performance. In [22], remote sensing images and OSM data were combined for land use/land cover classification. The contribution index (CI), which represents the activeness of user behavior, was utilized to assess the quality of the OSM data, and OSM data with high CI were preferentially selected as training samples. In [23], a high-resolution remote sensing image classification method using OSM data was proposed. Morphological erosion, super-pixel segmentation, and cluster analysis were used to refine training samples derived from the OSM data, and the OSM road data were directly superimposed on the classification map owing to their high accuracy and completeness. For high-spatial-resolution remotely sensed imagery, large amounts of structural and detailed information are available, yet these existing OSM-based classification methods utilize only the spectral characteristics of the image and ignore the spatial information inherent in the object distribution. Moreover, shadows become prominent as spatial resolution increases [24]. Given that shadows usually cause a loss of information and distortion in the affected regions, precise recognition of shadows is important for the analysis of high-spatial-resolution remotely sensed imagery [25]. However, shadow information is not recorded in OSM data; consequently, a training set derived from OSM lacks shadow samples.
In this study, a spectral-spatial classification framework that fuses high-resolution remotely sensed imagery and OSM data was developed. We derived training samples from the OSM data because they contain category and location information. However, OSM data may contain errors, such as position errors and attribute errors, due to the unprofessional production process and the absence of data quality control. Multiple information indexes were therefore introduced to refine the samples derived from the OSM data, reducing these errors and supplementing class information. Information indexes reflect the spectral and spatial characteristics of specific classes and can thus be used to label samples for these classes. In particular, the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), morphological building index (MBI), and bare soil index (BSI) were utilized to purify the samples of the corresponding classes extracted from the OSM data, whereas the morphological shadow index (MSI) was adopted to derive shadow samples. Considering the complex land cover distribution in urban areas, we used extended morphological attribute profiles (APs) to model the structural and spatial information of high-resolution images. In addition, principal component analysis (PCA) was applied to the original image and the derived APs to reduce data redundancy and select informative features. On the basis of the generated training samples and extracted features, the initial classification result was obtained using RF. Considering that the OSM road data contain road location and hierarchy information with high completeness, the OSM road information was superimposed on the classification map to reduce the misclassification between roads and other artificial structures. An approach that generates a road buffer with an adaptive radius in accordance with road hierarchy was developed. Experiments were conducted on data covering the area within the third ring road of Wuhan. Comparison with three state-of-the-art methods illustrated that the proposed framework achieved satisfactory classification results.
The rest of the paper is organized as follows: Section 2 introduces the methodology of the proposed framework. The datasets and experimental results are provided in Section 3, followed by a detailed discussion and a comparison with other methods in Section 4. Section 5 elaborates on the conclusions.

2. Methodology

The land cover information over urban areas was obtained using the proposed framework of three steps: sample generation, feature extraction, and road superposition. Specifically, the OSM data without roads were utilized to obtain the initial samples, which were successively refined by multiple information indexes to generate candidate samples. Equal numbers of training samples per class were then randomly selected from the candidate samples. Next, APs were computed on the PCA result of the image, and the dimensionality of the AP features was reduced by PCA before they were fed into an RF classifier. Lastly, the OSM roads were buffered with adaptive radii and superimposed on the classification map, and the pixels overlapped by the buffered roads were relabeled as the road class. The flowchart of the proposed framework is presented in Figure 1.

2.1. Sample Generation

In this section, a novel sample generation method that integrates OSM data and multiple information indexes is proposed. The OSM data contain abundant information on ground object categories, which provides training sample labels for image classification. Notably, some errors exist because OSM is user generated. Meanwhile, information indexes such as NDVI, NDWI, and MBI were adopted to extract training samples on the basis of the distinct spectral or structural characteristics of specific classes. In the proposed method, the samples generated from the OSM data were refined by the information indexes. In other words, the information indexes were calculated on pixels or objects of the corresponding class in OSM instead of on the entire image; for example, NDVI was calculated only in areas labeled as vegetation in OSM. Specifically, NDVI, NDWI, MBI, and BSI were introduced to purify the samples of vegetation, water, buildings, and soils, respectively, whereas MSI was used to derive the shadow samples.

2.1.1. Multiple Information Indexes

NDVI [26]: Given that vegetation has high near-infrared reflectance and low red-light reflectance, NDVI is defined as follows:

$$\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{RED}}{\mathrm{NIR} + \mathrm{RED}},$$

where NIR and RED denote the digital numbers (DNs) of the near-infrared and red bands of the image, respectively.
NDWI [27]: Water has high reflectance in the green band and low reflectance in the near-infrared band. On the basis of this spectral characteristic, NDWI is computed as:

$$\mathrm{NDWI} = \frac{\mathrm{GREEN} - \mathrm{NIR}}{\mathrm{GREEN} + \mathrm{NIR}},$$

where GREEN and NIR denote the DNs of the green and near-infrared bands of the image, respectively.
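As a concrete illustration, both indexes reduce to a few lines of NumPy. This is a minimal sketch assuming float-convertible band arrays; the small epsilon, added to avoid division by zero over dark pixels, is our addition and not part of the original definitions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-6):
    """NDWI = (GREEN - NIR) / (GREEN + NIR), computed per pixel."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + eps)
```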
MBI [28]: Buildings are brighter than their surrounding shadows. Thus, the basic idea of MBI is to relate the spectral-structural characteristics of buildings to morphological operators. Considering the characteristics of brightness, local contrast, size, and directionality, MBI can be represented as follows:

$$\mathrm{MBI} = \frac{\sum_{d,s} \mathrm{DMP}_{\mathrm{WTH}}(d,s)}{D \times S}$$
$$\mathrm{DMP}_{\mathrm{WTH}}(d,s) = \left| \mathrm{MP}_{\mathrm{WTH}}(d, s + \Delta s) - \mathrm{MP}_{\mathrm{WTH}}(d, s) \right|$$
$$\mathrm{MP}_{\mathrm{WTH}}(d,s) = b - \gamma_b^{\mathrm{re}}(d,s),$$

where D and S denote the numbers of directions and scales, respectively, and MP_WTH(d, s) denotes the morphological profile (MP) of the white top-hat performed on the original image b with direction d and scale s, i.e., b minus its opening by reconstruction γ_b^re(d, s).
MSI [29]: Given that shadows are darker than their surrounding objects, MSI can be derived from MBI by replacing the white top-hat with the black top-hat transformation. MSI can be formulated as:

$$\mathrm{MSI} = \frac{\sum_{d,s} \mathrm{DMP}_{\mathrm{BTH}}(d,s)}{D \times S}$$
$$\mathrm{DMP}_{\mathrm{BTH}}(d,s) = \left| \mathrm{MP}_{\mathrm{BTH}}(d, s + \Delta s) - \mathrm{MP}_{\mathrm{BTH}}(d, s) \right|$$
$$\mathrm{MP}_{\mathrm{BTH}}(d,s) = \varphi_b^{\mathrm{re}}(d,s) - b,$$

where φ_b^re(d, s) denotes the closing by reconstruction of b with direction d and scale s.
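The two morphological indexes can be sketched with scikit-image. The sketch below is a simplification under two stated assumptions: it operates on a single brightness band, and it uses isotropic disk structuring elements instead of the multi-directional linear ones of the original definitions, so it illustrates the differential top-hat-by-reconstruction mechanics rather than reproducing the exact indexes. The scale range follows the experimental settings in Section 3.2 (24 to 48 pixels, step 4).

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def mbi_msi(brightness, scales=range(24, 49, 4)):
    """Simplified MBI/MSI: mean of differential top-hat profiles over scales.
    brightness: 2-D array (e.g., the maximum of the visible bands)."""
    b = brightness.astype(np.float64)
    dmp_w, dmp_b, wth_prev, bth_prev = [], [], None, None
    for s in scales:
        se = disk(s // 2)
        # White top-hat by reconstruction: image minus opening-by-reconstruction
        wth = b - reconstruction(erosion(b, se), b, method="dilation")
        # Black top-hat by reconstruction: closing-by-reconstruction minus image
        bth = reconstruction(dilation(b, se), b, method="erosion") - b
        if wth_prev is not None:
            dmp_w.append(np.abs(wth - wth_prev))  # differential profiles
            dmp_b.append(np.abs(bth - bth_prev))
        wth_prev, bth_prev = wth, bth
    return np.mean(dmp_w, axis=0), np.mean(dmp_b, axis=0)  # (MBI, MSI)
```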
BSI: Bare soil can be extracted in the HSV color space, which describes an image by hue, saturation, and value; thresholding these components allows soils to be extracted from remote sensing imagery.

2.1.2. Sample Generation Method

The sample generation method includes the following steps:
  • Sample labeling based on OSM data: the category information of the OSM data is used to label samples in the high-resolution remotely sensed imagery according to their spatial coordinates.
  • Calculation of multiple information indexes: MBI, MSI, NDWI, and BSI are computed to indicate the areas of buildings, shadows, water, and soils, respectively. Moreover, NDVI is utilized to extract the forest and grass information.
  • Sample collection based on multiple information indexes: for NDVI and NDWI, the Otsu method is adopted to select the optimal threshold based on the histogram of the information indexes of the OSM-labeled vegetation and water samples (see the sketch after this list). For MBI, MSI, and BSI, the threshold is selected by experts. By applying the thresholds to the obtained information indexes, we obtain the samples belonging to the corresponding classes.
  • Training sample generation: the intersection of the sample sets provided by the OSM data and the multiple information indexes is selected to construct the training sample set. The OSM data do not contain shadow information; thus, only MSI is used to generate the shadow samples.
  • Training sample refinement: considering that some samples may be labeled as different classes by different volunteers, regions that are assigned to more than one category are removed to refine the training set.
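The Otsu step can be restricted to the OSM-labelled pixels of one class, exactly as described above. A minimal sketch, assuming a 2-D index map, a boolean OSM mask for the corresponding class, and scikit-image's threshold_otsu; for MBI, MSI, and BSI, an expert-chosen threshold would replace t.

```python
import numpy as np
from skimage.filters import threshold_otsu

def index_refined_samples(index_map, osm_mask):
    """Keep only OSM-labelled pixels whose index value passes the Otsu
    threshold computed on the histogram of the masked pixels."""
    t = threshold_otsu(index_map[osm_mask])  # threshold from OSM pixels only
    return osm_mask & (index_map >= t)       # boolean mask of refined samples
```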

2.2. Morphological Attribute Profiles

For high-resolution remotely sensed imagery, the intra-class variation of spectral features increases while the inter-class variation decreases, so the classification of high-resolution imagery cannot benefit considerably from spectral characteristics alone. Meanwhile, high-resolution remotely sensed imagery delineates the spatial features of surface objects clearly, and introducing spatial features can greatly improve classification accuracy. APs provide a multilevel spatial characterization of an image through the sequential application of morphological attribute filters, which are powerful tools for modeling different specifications of structural information [30]. These filters are connected operators; thus, images are processed by considering only their connected components. In other words, under the operation of morphological attribute filters, connected components of the processed image merge, enlarge, shrink, split, appear, or disappear. A connected component is composed of a group of iso-intensity pixels that are considered connected under a connectivity rule. The four-connected and eight-connected rules are two widely used connectivity rules, in which a pixel is regarded as connected to its four or eight neighboring pixels, respectively.
Two fundamental morphological attribute filters are attribute thinning and attribute thickening. Attribute filters process an image in accordance with a criterion, which is a logical predicate on a generic attribute; the criterion compares the attribute value calculated on a connected component with a predefined threshold [31]. Specifically, a criterion R that compares the attribute A of a connected component C with a threshold λ can be expressed as:

$$R = \left( A(C) \geq \lambda \right).$$
To derive APs on an image, the criterion is evaluated on all connected components of the image, which determines whether a connected component is kept or merged. If the criterion is fulfilled (its value is true), then the connected component is preserved; otherwise, it is merged with one of its adjacent connected components: the one with the closest lower value if the filter is a thinning, or the one with the closest higher value if it is a thickening [32].
An important property of the criteria is increasingness. Increasing criteria satisfy the following condition: if the criterion is verified for a connected component, then it will be also verified for all its supersets [33]. Increasing attributes (e.g., area) and inequation relations (e.g., >) can form increasing criteria. Furthermore, increasing criteria lead to increasing filters, which transforms the thinning and thickening filters into opening and closing filters, respectively.
APs are obtained by applying a sequence of attribute thinning and thickening filters to the image. For a greyscale image g, the APs can be defined as:

$$\mathrm{APs}(g) = \left\{ \varphi^{C_1}(g), \varphi^{C_2}(g), \ldots, \varphi^{C_n}(g),\; g,\; \lambda^{C_1}(g), \lambda^{C_2}(g), \ldots, \lambda^{C_n}(g) \right\},$$

where φ^{C_k}(g) and λ^{C_k}(g) denote the attribute thinning and thickening outputs of the original greyscale image g with the k-th criterion, respectively. Analogous to extended MPs, the extended APs (EAPs) are defined as the APs extracted from the principal components of an image [8]. Thus, EAPs can be formulated as:

$$\mathrm{EAPs} = \left\{ \mathrm{APs}(g_1), \mathrm{APs}(g_2), \ldots, \mathrm{APs}(g_n) \right\},$$

where g_n denotes the n-th principal component of the image.
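A sketch of the EAP computation, assuming scikit-image and scikit-learn. Because the area attribute is increasing, attribute thinning and thickening reduce to area opening and closing, for which ready-made implementations exist; the area thresholds follow the experimental settings in Section 3.2, while the number of retained principal components is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import area_opening, area_closing

def extended_attribute_profiles(image, n_pcs=3, areas=(25, 100, 400, 1600)):
    """EAP sketch: area attribute profiles on the first principal components.
    image: (H, W, B) array. Returns an (H, W, n_pcs * (2 * len(areas) + 1))
    feature stack."""
    h, w, bands = image.shape
    pcs = PCA(n_components=n_pcs).fit_transform(
        image.reshape(-1, bands).astype(np.float64)).reshape(h, w, n_pcs)
    profiles = []
    for i in range(n_pcs):
        g = pcs[..., i]
        # Attribute thinning with the increasing area attribute = area opening
        profiles += [area_opening(g, area_threshold=a) for a in areas]
        profiles.append(g)  # the component itself
        # Attribute thickening with the increasing area attribute = area closing
        profiles += [area_closing(g, area_threshold=a) for a in areas]
    return np.stack(profiles, axis=-1)
```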

2.3. Road Superposition

Roads and buildings usually exhibit similar spectral characteristics because of their similar construction materials, so severe misclassification between roads and buildings is difficult to avoid regardless of the accuracy of the selected training samples. OSM, which was initially designed for volunteers to collect street data, has road data of higher completeness than the other classes. OSM roads have reached a completeness of more than 80% worldwide [34], and even higher in countries such as the US and China, where navigation companies contribute to OSM. To avoid severe misclassification, OSM roads were not selected as training samples. Instead, we superimposed the OSM road data on the classification map to fully utilize their excellent completeness [35,36].
Road buffering is conducted before superimposition because the OSM road data are in line format. In traditional methods, the OSM road buffer is generated with a fixed-length radius [23]. However, road buffers with a fixed-length radius cannot represent roads of all hierarchies because roads belonging to different hierarchies have dissimilar widths. To address this issue, an approach that derives the OSM road buffer with an adaptive radius was developed.
In our method, the road buffer radius was determined in accordance with road hierarchy, with the spatial resolution of the remote sensing image also taken into account. In general, the radius should satisfy the following conditions:

$$\begin{cases} R = k d, \\ W_{\min} \leq R \leq W_{\max}, \end{cases}$$

where R denotes the radius of the road buffer, d denotes the spatial resolution of the image, k denotes a hierarchy-dependent multiplier, and W_min and W_max are the minimum and maximum road widths recommended by the related standards, respectively.
The Technical Standard of Highway Engineering, the current Chinese standard for roads and traffic, recommends the width range of each road hierarchy. Table 1 shows the resulting radii of the road buffer at different hierarchies for an image with a spatial resolution of 4 m.
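The adaptive radius then follows directly from the constraint above. A small sketch using the k values of Table 1; the hierarchy keys and the optional clipping to the recommended width range are our assumptions about how the rule would be wired up.

```python
# R = k * d, optionally clipped to the recommended width range [w_min, w_max].
# k values follow Table 1; d is the image spatial resolution in metres.
K_BY_HIERARCHY = {"primary": 6, "secondary": 3, "tertiary": 2}  # others: 1

def buffer_radius(hierarchy, d=4.0, w_min=None, w_max=None):
    r = K_BY_HIERARCHY.get(hierarchy, 1) * d
    if w_min is not None:
        r = max(r, w_min)
    if w_max is not None:
        r = min(r, w_max)
    return r  # e.g., buffer_radius("primary", d=4.0) -> 24.0
```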

3. Experiments

3.1. Study Area and Datasets

Wuhan is one of the largest cities in central China. The study area is located within the third ring road of Wuhan, i.e., the urban area of the city. It covers approximately 500 km² and occupies parts of seven districts: Jianghan, Jiangan, Qiaokou, Hanyang, Wuchang, Qingshan, and Hongshan. Figure 2 shows a GaoFen-2 multispectral image acquired on 1 September 2016. The image contains 5544 × 4720 pixels and has a spatial resolution of 4 m. Four channels, namely blue, green, red, and near-infrared, are incorporated in the image.
A dataset of OSM covering the study area, which was downloaded from https://download.geofabrik.de/asia/china.html, was used. The dataset was composed of eight shapefile layers called points, places, waterways, railways, roads, natural, land use, and buildings, respectively.
The GaoFen-2 image was preprocessed with a series of steps including radiometric calibration, atmospheric correction, and georeferencing [37]. Radiometric calibration converted DNs to top-of-atmosphere (TOA) reflectance with parameters provided by the China Centre for Resources Satellite Data and Application. The TOA reflectance was then converted to ground surface reflectance by atmospheric correction with the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module of the ENVI (Environment for Visualizing Images) software. Lastly, the GaoFen-2 image was georeferenced to the OSM data to remove the spatial offset by a first-order polynomial transformation of pairwise control points.

3.2. Experimental Setting

Four equal-sized sub-regions of 702 × 690 pixels were selected as test regions. The sub-images and the corresponding ground truth annotations of the test regions are shown in Figure 3. Seven typical classes were considered: buildings, water, forests, grasses, roads, soils, and shadows. Table 2 presents the number of testing samples for each class in the test regions. RF [38] was employed as the classifier in the experiments.
The minimal size, maximal size, and interval of the structuring element used for generating MBI and MSI were set to 24, 48, and 4 pixels, respectively. The number of training samples of each class was 300 pixels. For the APs, area was chosen as the attribute, and the corresponding thresholds were set to 25, 100, 400, and 1600 pixels. The number of trees for constructing the RF classifier was 400. The overall accuracy (OA), Kappa coefficient, and F1-score [39] for each class were used to evaluate the classification performance.
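For reference, this classifier setup reduces to a few lines with scikit-learn; the random feature matrix below is a placeholder standing in for the EAP features of the 300 labelled pixels per class.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_classes, n_per_class, n_features = 7, 300, 27      # placeholder dimensions
X_train = np.random.rand(n_classes * n_per_class, n_features)  # EAP features
y_train = np.repeat(np.arange(n_classes), n_per_class)         # class labels

# 400 trees, as in the experimental setting
rf = RandomForestClassifier(n_estimators=400, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
predicted = rf.predict(np.random.rand(5, n_features))  # per-pixel labels
```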

3.3. Experiment Results

The classification maps and accuracies of the four test regions are shown in Figure 4 and Table 3, respectively. Figure 4 clearly shows that the proposed method gave satisfactory classification results. The objects in the classified image were close to the real ground features in terms of size and shape. In particular, the well-shaped water, forests, roads, and shadows showed explicit boundaries and were well separated from their surrounding objects.
As shown in Table 3, the proposed framework achieved an overall classification accuracy of 89.4%. Water received the best accuracies among all classes; its accuracies in the four test regions were 97.8%, 95.6%, 98.6%, and 95.7%. Roads were also well identified, with an accuracy of 93.2%, which indicates that the OSM data have excellent completeness in the Wuhan urban area; by employing the road superimposition strategy, the structure and continuity of the roads were preserved. Buildings obtained a high accuracy of 84.7% because interference from roads was removed. Although forests and grasses show similar spectral and spatial characteristics, they were correctly recognized with accuracies of 79.2% and 87.9%, respectively. The classification accuracies of shadows and soils were also acceptable, reaching 82.1% and 77.9%, respectively.

4. Discussion

4.1. Method of Sample Generation

The class distribution of samples in feature space was examined to analyze the effectiveness of the proposed sample generation method. A comparison between the class distribution of the original OSM samples and that of the samples generated by the proposed method is presented in Figure 5, where the horizontal and vertical axes denote the first two principal components obtained by PCA.
As shown in Figure 5, the original OSM samples were dispersive, whereas the derived samples were aggregated in the feature space. The distribution of the original OSM samples indicates that several classes were seriously confused with one another, especially the samples of buildings and soils, and some OSM samples, such as those of water, were far from the center of their class in the feature space. The derived samples had more explicit boundaries and better separability among classes than the original OSM samples. A few building samples were still mixed with soil samples owing to the similar spectral characteristics of the two classes; nevertheless, the general quality of the derived samples was considerably improved.

4.2. Utilization of Spatial Features

A comparative experiment that classified the imagery using only spectral features was conducted to verify the effect of spatial features on the classification performance. The experiment was performed under the same conditions as the proposed framework except for the utilization of spatial features. The classification maps and accuracies of the experiment are presented in Figure 6 and Table 4, respectively.
Comparison of the results presented in Figure 3 and Figure 6 indicated that most pixels were correctly classified. However, in regions II and IV, many small shadow objects were misclassified as water due to their similar spectral characteristics. Spatial features provided additional characteristics that enhanced the separability between different classes. As a result, the misidentification between shadows and water decreased considerably after the spatial features were integrated.
Table 4 shows that the OA of the spectral-based classification was 83.4%, which is worse than the 89.4% given by the spectral-spatial classification. The utilization of spatial features particularly benefited the recognition of shadows and soils: compared with the spectral-based method, the accuracy increases provided by the proposed method were 23.9% for shadows and 16.5% for soils. The classification accuracies of water, forests, and grasses also improved by 2–6%.

4.3. Strategy of Road Superimposition

A comparative experiment that extracted training samples from the OSM road data, instead of directly overlaying them on the classification map, was carried out to demonstrate the effectiveness of the road superimposition strategy. The classification maps and accuracies are presented in Figure 7 and Table 5, respectively.
As shown in Figure 7, a large number of buildings and soils were misclassified as roads, and some pixels of water, forests, and grasses were also confused with roads. This phenomenon can be attributed to the similar spectral characteristics among buildings, soils, and roads.
Comparisons between Table 3 and Table 5 indicated that the method which utilized the road superimposition strategy provided better classification performance and increased the OA by 7.6%. Specifically, the classification accuracy of buildings increased by 39.1%, and the accuracy of roads increased from 77.4% to 93.2%. The road superimposition strategy considerably reduced the severe misclassification between roads and other artificial architectures. Notably, the success of the OSM road superimposition strategy was attributed to the high completeness of the OSM roads. For the other classes of information in OSM, the superimposition strategy was unsuitable due to low completeness.

4.4. The Object-Based Strategy

The pixel-based approach and the object-based approach are both widely accepted strategies for high-spatial-resolution image classification. An experiment of object-based image analysis (OBIA) was conducted for comparison. In the object-based case, a multi-resolution segmentation algorithm was used to divide the image into regions in which pixels are spatially adjacent and similar in the feature domain. The representative feature of each region was taken as the mean feature value of its pixels, and the corresponding label was determined by the dominant class, as sketched below. The classification maps and accuracies using OBIA are presented in Figure 8 and Table 6, respectively.
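A minimal sketch of this object-level aggregation, assuming a precomputed segmentation map (segment ids per pixel) and a label image in which unlabelled pixels are marked with -1:

```python
import numpy as np

def object_features_and_labels(features, segments, labels):
    """For each segment: mean feature vector of its pixels, and the dominant
    (majority) class among its labelled pixels (-1 if none are labelled).
    features: (H, W, F); segments, labels: (H, W) integer arrays."""
    obj_feats, obj_labels = [], []
    for sid in np.unique(segments):
        mask = segments == sid
        obj_feats.append(features[mask].mean(axis=0))  # mean feature value
        lab = labels[mask]
        lab = lab[lab >= 0]                            # keep labelled pixels
        obj_labels.append(np.bincount(lab).argmax() if lab.size else -1)
    return np.array(obj_feats), np.array(obj_labels)
```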
Comparison between Figure 4 and Figure 8 shows that the classification maps became cleaner, with less noise than before. The object-based method is advantageous because it reduces the salt-and-pepper noise in the classification results. As shown in Table 6, the OA over the four test regions was 89.3%, close to the 89.4% obtained by the pixel-based method, and the accuracies of buildings, water, forests, grasses, and roads were nearly equal to those of the pixel-based method. From these results, we conclude that the proposed framework is appropriate for both pixel- and object-based classification: the accuracies of the two cases are close, whereas the classification maps derived by OBIA may be locally more homogeneous.

4.5. Sensitivity Analysis of Sample Numbers

In this experiment, classifications with different numbers of training samples per class were conducted using the pixel- and object-based methods. The OA obtained with different amounts of training samples is presented in Figure 9. Notably, the accuracies did not fluctuate considerably, remaining stable between 85% and 90%, and peaked when the number of training samples per class was 300. Moreover, the accuracy of the OBIA method declined more sharply than that of the pixel-based method when the number of samples exceeded 300. Overall, the proposed framework is not particularly sensitive to the number of samples.

4.6. Comparison with the State-of-the-Art Methods

Three state-of-the-art methods that use OSM data for remote sensing image classification were considered for comparison. The first method (CI method) introduced the CI to assess the importance of the OSM data and selected samples from OSM data with high CI [22]. The second method (SR method) refined the training samples derived from the OSM data using a set of techniques and superimposed the OSM road data on the classification map [23]. The third method (AS method) automatically extracted samples on the basis of multiple information indexes for remote sensing image classification [40]. The classification maps and confusion matrices of these methods and the proposed method are presented in Figure 10 and Table 7, respectively.
Figure 10 shows that the proposed method exhibited promising performance. Although the training samples of the CI method were selected from the datasets with high CI values, the image was still seriously misclassified. The classification maps of the SR method show evident confusion between buildings and soils and between forests and grasses; furthermore, all shadows were recognized as water owing to the lack of shadow samples. The classification maps of the AS method show that most pixels were correctly classified, especially the pixels of shadows, water, and vegetation. However, numerous pixels were misclassified as roads, and grasses and forests could not be separated because they were integrally represented by the vegetation index.
By comparing the confusion matrices of the four aforementioned methods, the following conclusions were obtained:
  • The quality of training samples is crucial for classification performance. In our experiments, the methods using refined samples (SR, AS, and the proposed method) achieved an OA of at least 64.9%, whereas the method using raw OSM data as samples (CI method) achieved an OA of only 48.6%.
  • Superimposing the OSM road data on the classification map is better than using them as training samples. The number of road pixels in the four test regions was 107,234. The methods using road superposition (SR and the proposed method) recognized at least 102,165 road pixels, whereas the methods using road samples (CI and AS) recognized at most 65,183.
  • Shadow samples are important for high-resolution remote sensing image classification. In our case, shadows covered 5–10% of the area in the test regions. The CI and SR methods discarded these shadows, whereas the AS method and the proposed method took shadows into consideration; the experimental results illustrate that the latter methods achieved higher OA than the former ones.

4.7. Discussion on the OSM Data Quality

The OSM data have gained increasing attention in land cover/land use mapping, and different strategies have been developed according to the accuracy and completeness of the OSM data. In regions where volunteers are active, e.g., some European countries, the quality of OSM data is as high as that of proprietary data, and land cover/land use maps can be directly extracted and generated from OSM data [19,36,41]. However, for most places in the world, OSM data do not have high accuracy or completeness. In this context, OSM data are combined with remote sensing imagery for land cover/land use mapping, and training samples can be extracted from OSM data for remote sensing image classification [21,22,23]. Inaccurate labels contributed by unprofessional volunteers hinder image classification; thus, it is important to collect reliable and representative samples from OSM data. Moreover, the OSM road network can be adopted to segment the study area for parcel-based land use mapping [42,43], although the performance of this strategy relies on the completeness of the OSM road data. Although OSM data quality is unsatisfactory in certain regions, OSM information is expected to become more accurate and complete as increasing numbers of volunteers contribute their knowledge.

5. Conclusions

In this study, high-resolution remotely sensed imagery and OSM data were fused to obtain the land cover classification map over urban areas. The class attributes from the OSM data and multiple information indexes from imagery were integrated to extract training samples. APs were computed to model the spatial features of imagery and PCA was performed to reduce the information redundancy. On the basis of the generated training samples and extracted features, an initial classification map was obtained. An OSM road buffer with an adaptive radius was derived in consideration of road hierarchy. After superimposing the road buffer on the classification map and relabeling the overlapped pixels as the road category, the final classification result was obtained.
A high-resolution multispectral image acquired by the GaoFen-2 satellite and the OSM data covering Wuhan City, China were used to test the effectiveness of the proposed framework. The experimental results illustrate that the proposed framework produced a satisfactory classification result with high accuracy. Firstly, the samples derived by the proposed method were more reliable than the raw OSM samples, showing better separability in feature space. Secondly, the integration of APs improved the classification accuracy compared with the approach that utilized only spectral features. Thirdly, the OSM road superimposition strategy effectively reduced the misclassification among buildings, soils, and roads. The proposed framework was compared with three state-of-the-art methods, and the experimental results demonstrated that it outperformed the other methods in terms of classification accuracy and visual interpretation.
In the future, we plan to conduct multi-temporal analysis on urban areas using multi-sensor images and OSM data [44,45]. Considerable attention will also be paid to the fusion of open social and remote sensing data for the analysis of economic and social issues in urban areas.

Author Contributions

Conceptualization, N.L. and T.W.; methodology, Q.L.; software, H.H.; validation, T.W., and H.H.; formal analysis, T.W.; investigation, T.W. and Q.L.; resources, N.L.; data curation, H.H.; writing—original draft preparation, H.H.; writing—review and editing, T.W. and Q.L.; visualization, H.H.; supervision, N.L. and Q.L.; project administration, N.L.; funding acquisition, N.L.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2018YFC0809100.

Acknowledgments

We thank the anonymous reviewers for their insights and constructive comments, which helped to improve the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, X.; Huang, J.; Rozelle, S.; Zhang, J.; Li, Z. Impact of urbanization on cultivated land changes in China. Land Use Policy 2015, 45, 1–7.
  2. Hegazy, I.R.; Kaloop, M.R. Monitoring urban growth and land use change detection with GIS and remote sensing techniques in Daqahlia governorate, Egypt. Int. J. Sustain. Built Environ. 2015, 4, 117–124.
  3. Chen, J.; Du, P.; Wu, C.; Xia, J.; Chanussot, J. Mapping urban land cover of a large area using multiple sensors multiple features. Remote Sens. 2018, 10, 872.
  4. Yu, W.; Zhou, W. The spatiotemporal pattern of urban expansion in China: A comparison study of three urban megaregions. Remote Sens. 2018, 9, 45.
  5. Huang, X.; Lu, Q.; Zhang, L. A multi-index learning approach for classification of high-resolution remotely sensed images over urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 90, 36–48.
  6. Geiß, C.; Klotz, M.; Schmitt, A.; Taubenböck, H. Object-based morphological profiles for classification of remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5952–5963.
  7. Li, M.; Zang, S.; Zhang, B.; Li, S.; Wu, C. A review of remote sensing image classification techniques: The role of spatio-contextual information. Eur. J. Remote Sens. 2014, 47, 389–411.
  8. Ghamisi, P.; Dalla Mura, M.; Benediktsson, J.A. A survey on spectral–spatial classification techniques based on attribute profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2335–2353.
  9. Yang, W.; Yin, X.; Xia, G.-S. Learning high-level features for satellite image classification with limited labeled samples. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4472–4482.
  10. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Marais Sicre, C.; Dedieu, G. Effect of training class label noise on classification performances for land cover mapping with satellite image time series. Remote Sens. 2017, 9, 173.
  11. Wang, Z.; Du, B.; Zhang, L.; Zhang, L.; Jia, X. A novel semisupervised active-learning algorithm for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3071–3083.
  12. Persello, C.; Bruzzone, L. Active and semisupervised learning for the classification of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956.
  13. Tuia, D.; Volpi, M.; Copa, L.; Kanevski, M.; Muñoz-Marí, J. A survey of active learning algorithms for supervised remote sensing image classification. IEEE J. Sel. Top. Signal Process. 2011, 5, 606–617.
  14. Lu, Q.; Ma, Y.; Xia, G.-S. Active learning for training sample selection in remote sensing image classification using spatial information. Remote Sens. Lett. 2017, 8, 1211–1220.
  15. Maulik, U.; Chakraborty, D. Learning with transductive SVM for semisupervised pixel classification of remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2013, 77, 66–78.
  16. Tan, K.; Zhu, J.; Du, Q.; Wu, L.; Du, P. A novel tri-training technique for semi-supervised classification of hyperspectral images based on diversity measurement. Remote Sens. 2016, 8, 749.
  17. Jokar Arsanjani, J.; Helbich, M.; Bakillah, M. Exploiting volunteered geographic information to ease land use mapping of an urban landscape. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Proceedings of the 29th Urban Data Management Symposium, London, UK, 29–31 May 2013; University College London: London, UK, 2013; pp. 51–55.
  18. Haklay, M.; Weber, P. OpenStreetMap: User-generated street maps. IEEE Pervasive Comput. 2008, 7, 12–18.
  19. Estima, J.; Painho, M. Investigating the potential of OpenStreetMap for land use/land cover production: A case study for continental Portugal. In OpenStreetMap in GIScience; Jokar Arsanjani, J., Zipf, A., Mooney, P., Helbich, M., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 273–293.
  20. Haklay, M. How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets. Environ. Plann. B Plann. Des. 2010, 37, 682–703.
  21. Johnson, B.A.; Iizuka, K. Integrating OpenStreetMap crowdsourced data and Landsat time-series imagery for rapid land use/land cover (LULC) mapping: Case study of the Laguna de Bay area of the Philippines. Appl. Geogr. 2016, 67, 140–149.
  22. Geiß, C.; Schauß, A.; Riedlinger, T.; Dech, S.; Zelaya, C.; Guzmán, N.; Hube, M.A.; Arsanjani, J.J.; Taubenböck, H. Joint use of remote sensing data and volunteered geographic information for exposure estimation: Evidence from Valparaíso, Chile. Nat. Hazards 2016, 86, 81–105.
  23. Wan, T.; Lu, H.; Lu, Q.; Luo, N. Classification of high-resolution remote-sensing image using OpenStreetMap information. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2305–2309.
  24. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177.
  25. Huang, W.; Bu, M. Detecting shadows in high-resolution remote-sensing images of urban areas using spectral and spatial features. Int. J. Remote Sens. 2015, 36, 6224–6244.
  26. Goward, S.N.; Markham, B.; Dye, D.G.; Dulaney, W.; Yang, J. Normalized difference vegetation index measurements from the advanced very high resolution radiometer. Remote Sens. Environ. 1991, 35, 257–277.
  27. McFeeters, S.K. The use of the normalized difference water index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
  28. Huang, X.; Zhang, L. A multidirectional and multiscale morphological index for automatic building extraction from multispectral GeoEye-1 imagery. Photogramm. Eng. Remote Sens. 2011, 77, 721–732.
  29. Huang, X.; Zhang, L. Morphological building/shadow index for building extraction from high-resolution imagery over urban areas. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 161–172.
  30. Bhardwaj, K.; Patra, S. An unsupervised technique for optimal feature selection in attribute profiles for spectral-spatial classification of hyperspectral images. ISPRS J. Photogramm. Remote Sens. 2018, 138, 139–150.
  31. Salembier Clairon, P.J.; Wilkinson, M. Connected operators: A review of region-based morphological image processing techniques. IEEE Signal Process. Mag. 2009, 26, 136–157.
  32. Dalla Mura, M.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
  33. Ghamisi, P.; Benediktsson, J.A.; Sveinsson, J.R. Automatic spectral–spatial classification framework based on attribute profiles and supervised feature extraction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5771–5782.
  34. Barrington-Leigh, C.; Millard-Ball, A. The world's user-generated road map is more than 80% complete. PLoS ONE 2017, 12, e0180698.
  35. Neis, P.; Zipf, A. Analyzing the contributor activity of a volunteered geographic information project: The case of OpenStreetMap. ISPRS Int. J. Geoinf. 2012, 1, 146–165.
  36. Jokar Arsanjani, J.; Helbich, M.; Bakillah, M.; Hagenauer, J.; Zipf, A. Toward mapping land-use patterns from volunteered geographic information. Int. J. Geogr. Inf. Sci. 2013, 27, 2264–2278.
  37. Wang, H.; Wang, C.; Wu, H. Using GF-2 imagery and the conditional random field model for urban forest cover mapping. Remote Sens. Lett. 2016, 7, 378–387.
  38. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
  39. Powers, D.M.W. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation. J. Mach. Learn. Technol. 2011, 2, 37–63.
  40. Huang, X.; Weng, C.; Lu, Q.; Feng, T.; Zhang, L. Automatic labelling and selection of training samples for high-resolution remote sensing image classification over urban areas. Remote Sens. 2015, 7, 16024–16044.
  41. Vaz, E.; Jokar Arsanjani, J. Crowdsourced mapping of land use in urban dense environments: An assessment of Toronto. Can. Geogr./Le Géogr. Can. 2015, 59, 246–255.
  42. Grippa, T.; Georganos, S.; Zarougui, S.; Bognounou, P.; Diboulo, E.; Forget, Y.; Lennert, M.; Vanhuysse, S.; Mboga, N.; Wolff, E. Mapping urban land use at street block level using OpenStreetMap, remote sensing data, and spatial metrics. ISPRS Int. J. Geoinf. 2018, 7, 246.
  43. Hu, T.; Yang, J.; Li, X.; Gong, P. Mapping urban land use by using Landsat images and open social data. Remote Sens. 2016, 8, 151.
  44. Tang, Y.; Zhang, L. Urban change analysis with multi-sensor multispectral imagery. Remote Sens. 2017, 9, 252.
  45. Luo, H.; Liu, C.; Wu, C.; Guo, X. Urban change detection based on Dempster–Shafer theory for multitemporal very high-resolution imagery. Remote Sens. 2018, 10, 980.
Figure 1. Flowchart of the proposed framework.
Figure 2. GaoFen-2 image of the study area.
Figure 3. Images (left) and ground truth annotations (right) of the four test regions (yellow = buildings, blue = water, dark green = forests, light green = grasses, pink = roads, orange = soils, and black = shadows).
Figure 4. Classification maps of the four test regions provided by the proposed method (yellow = buildings, blue = water, dark green = forests, light green = grasses, pink = roads, orange = soils, and black = shadows).
Figure 5. Comparison between the class distribution of the original OpenStreetMap (OSM) samples (left) and the samples generated by the proposed method (right) (buildings = red, water = blue, forests = cyan, grasses = green, soils = black).
Figure 6. Classification maps of the four test regions provided by the spectral-based approach (yellow = buildings, blue = water, dark green = forests, light green = grasses, pink = roads, orange = soils, and black = shadows).
Figure 7. Classification maps of the four test regions provided by the approach that used roads as training samples (yellow = buildings, blue = water, dark green = forests, light green = grasses, pink = roads, orange = soils, and black = shadows).
Figure 8. Classification maps of the four test regions provided by the object-based approach (yellow = buildings, blue = water, dark green = forests, light green = grasses, pink = roads, orange = soils, and black = shadows).
Figure 9. Accuracies with different numbers of training samples.
Figure 10. Classification maps of different methods in the test regions.
Table 1. Determined radii of road buffer with different hierarchies.

| Hierarchy | Recommended Width Range (m) | k Value | Radius of Road Buffer (m) |
|-----------|-----------------------------|---------|---------------------------|
| Primary   | 23~45                       | 6       | 24                        |
| Secondary | 4.5~12                      | 3       | 12                        |
| Tertiary  | 4.5~12                      | 2       | 8                         |
| Others    | 4.5~12                      | 1       | 4                         |
Table 2. Number of testing samples for each class in the test regions.

| Region | Buildings | Water   | Forests | Grasses | Roads   | Soils  | Shadows |
|--------|-----------|---------|---------|---------|---------|--------|---------|
| I      | 62,431    | 43,563  | 28,855  | 41,776  | 27,328  | 8215   | 14,600  |
| II     | 43,985    | 31,830  | 16,323  | 27,035  | 35,476  | 5236   | 36,822  |
| III    | 15,819    | 161,084 | 17,279  | 42,759  | 22,689  | 13,722 | 9074    |
| IV     | 30,234    | 59,725  | 17,446  | 48,984  | 21,741  | 2859   | 30,966  |
| All    | 152,469   | 296,202 | 79,903  | 160,554 | 107,234 | 30,032 | 91,462  |
Table 3. Classification accuracies of the proposed method in the four test regions in terms of overall accuracy (OA), Kappa coefficient, and F1-score for each class.

| Region | OA    | Kappa  | Buildings | Water | Forests | Grasses | Roads | Soils | Shadows |
|--------|-------|--------|-----------|-------|---------|---------|-------|-------|---------|
| I      | 88.5% | 0.8601 | 90.5%     | 97.8% | 80.0%   | 81.9%   | 96.6% | 81.9% | 78.7%   |
| II     | 88.6% | 0.8637 | 82.2%     | 95.6% | 82.8%   | 92.1%   | 96.8% | 73.2% | 83.6%   |
| III    | 91.8% | 0.8727 | 75.1%     | 98.6% | 78.7%   | 89.4%   | 90.7% | 76.8% | 70.0%   |
| IV     | 87.7% | 0.8488 | 81.3%     | 95.7% | 75.7%   | 89.4%   | 85.7% | 79.2% | 86.5%   |
| All    | 89.4% | 0.8685 | 84.7%     | 97.6% | 79.2%   | 87.9%   | 93.2% | 77.9% | 82.1%   |
Table 4. Accuracies of the spectral-based classification approach in the four test regions in terms of overall accuracy, Kappa coefficient, and F1-score for each class.

| Region | OA    | Kappa  | Buildings | Water | Forests | Grasses | Roads | Soils | Shadows |
|--------|-------|--------|-----------|-------|---------|---------|-------|-------|---------|
| I      | 86.1% | 0.8289 | 91.2%     | 91.4% | 79.4%   | 82.9%   | 96.7% | 61.4% | 59.1%   |
| II     | 79.4% | 0.7523 | 77.9%     | 78.2% | 81.4%   | 87.5%   | 97.1% | 42.7% | 61.1%   |
| III    | 89.1% | 0.8296 | 74.7%     | 96.6% | 73.4%   | 87.8%   | 95.9% | 69.7% | 39.7%   |
| IV     | 77.0% | 0.7166 | 72.8%     | 88.1% | 61.8%   | 76.7%   | 90.1% | 56.7% | 61.5%   |
| All    | 83.4% | 0.7938 | 82.2%     | 91.9% | 73.9%   | 83.4%   | 95.3% | 61.4% | 58.2%   |
Table 5. Classification accuracies of the approach that used roads as training samples in the four test regions in terms of overall accuracy, Kappa coefficient, and F1-score for each class.

| Region | OA    | Kappa  | Buildings | Water | Forests | Grasses | Roads | Soils | Shadows |
|--------|-------|--------|-----------|-------|---------|---------|-------|-------|---------|
| I      | 80.1% | 0.7598 | 63.0%     | 95.7% | 72.9%   | 94.9%   | 79.9% | 55.6% | 91.9%   |
| II     | 75.0% | 0.7003 | 42.3%     | 95.6% | 77.6%   | 98.6%   | 76.3% | 39.0% | 81.5%   |
| III    | 88.0% | 0.8118 | 30.0%     | 97.6% | 78.2%   | 98.2%   | 69.6% | 50.8% | 91.8%   |
| IV     | 84.0% | 0.8033 | 47.1%     | 91.5% | 87.1%   | 92.0%   | 83.6% | 58.0% | 93.8%   |
| All    | 81.8% | 0.7688 | 45.6%     | 95.1% | 79.0%   | 96.0%   | 77.4% | 50.9% | 89.8%   |
Table 6. Classification accuracies of the object-based approach in the four test regions in terms of overall accuracy, Kappa coefficient, and F1-score for each class.

| Region | OA    | Kappa  | Buildings | Water | Forests | Grasses | Roads | Soils | Shadows |
|--------|-------|--------|-----------|-------|---------|---------|-------|-------|---------|
| I      | 87.8% | 0.8518 | 87.9%     | 97.7% | 84.4%   | 84.2%   | 91.5% | 70.0% | 82.5%   |
| II     | 87.1% | 0.8456 | 80.9%     | 95.5% | 78.6%   | 90.4%   | 97.1% | 65.6% | 81.9%   |
| III    | 94.1% | 0.9072 | 75.4%     | 99.1% | 84.9%   | 92.6%   | 95.9% | 76.7% | 81.5%   |
| IV     | 86.5% | 0.8353 | 81.1%     | 94.6% | 72.4%   | 85.7%   | 91.6% | 63.1% | 86.6%   |
| All    | 89.3% | 0.8674 | 83.2%     | 97.6% | 80.1%   | 88.0%   | 94.3% | 71.3% | 83.6%   |
Table 7. Confusion matrices of (a) CI, (b) SR, (c) AS, and (d) the proposed method.

(a) OA = 48.6%, Kappa = 0.3777

|           | Buildings | Water   | Forests | Grasses | Roads  | Soils  | Shadows |
|-----------|-----------|---------|---------|---------|--------|--------|---------|
| Buildings | 104,013   | 83,255  | 835     | 752     | 41,645 | 6291   | 43,964  |
| Water     | 1344      | 132,673 | 13,568  | 26,152  | 980    | 1262   | 3964    |
| Forests   | 3804      | 6111    | 37,881  | 3861    | 5832   | 1027   | 290     |
| Grasses   | 2789      | 181     | 10,173  | 107,793 | 6772   | 467    | 153     |
| Roads     | 29,704    | 73,646  | 17,041  | 18,276  | 43,981 | 1362   | 35,320  |
| Soils     | 10,287    | 773     | 443     | 529     | 10,792 | 20,427 | 410     |
| Shadows   | 0         | 0       | 0       | 0       | 0      | 0      | 0       |

(b) OA = 64.9%, Kappa = 0.5676

|           | Buildings | Water   | Forests | Grasses | Roads   | Soils  | Shadows |
|-----------|-----------|---------|---------|---------|---------|--------|---------|
| Buildings | 47,938    | 212     | 1869    | 8       | 161     | 2      | 5       |
| Water     | 40,633    | 282,585 | 4593    | 7390    | 139     | 0      | 41,649  |
| Forests   | 5339      | 12,389  | 74,863  | 88,751  | 901     | 42     | 46,785  |
| Grasses   | 2543      | 411     | 75      | 58,018  | 528     | 4      | 3       |
| Roads     | 4549      | 4020    | 577     | 83      | 106,163 | 806    | 1553    |
| Soils     | 57,469    | 771     | 276     | 12,445  | 3058    | 27,335 | 997     |
| Shadows   | 0         | 0       | 0       | 0       | 0       | 0      | 0       |

(c) OA = 71.2%, Kappa = 0.6423

|           | Buildings | Water   | Forests | Grasses | Roads  | Soils  | Shadows |
|-----------|-----------|---------|---------|---------|--------|--------|---------|
| Buildings | 85,833    | 2510    | 5476    | 20,438  | 4798   | 108    | 5       |
| Water     | 798       | 268,397 | 1515    | 3670    | 55     | 9      | 4       |
| Forests   | 0         | 0       | 0       | 0       | 0      | 0      | 0       |
| Grasses   | 4280      | 1138    | 73,119  | 149,966 | 14,024 | 1607   | 1409    |
| Roads     | 54,773    | 4480    | 1221    | 2640    | 65,183 | 9094   | 10,936  |
| Soils     | 3138      | 834     | 2884    | 36      | 1      | 13,877 | 55      |
| Shadows   | 3439      | 22,020  | 5239    | 7123    | 5910   | 630    | 72,328  |

(d) OA = 89.4%, Kappa = 0.8685

|           | Buildings | Water   | Forests | Grasses | Roads   | Soils  | Shadows |
|-----------|-----------|---------|---------|---------|---------|--------|---------|
| Buildings | 120,240   | 7895    | 388     | 5640    | 7326    | 4625   | 16      |
| Water     | 1182      | 285,605 | 643     | 3035    | 318     | 201    | 4       |
| Forests   | 2433      | 1708    | 67,072  | 17,153  | 381     | 583    | 2       |
| Grasses   | 6310      | 217     | 9047    | 141,286 | 1357    | 1999   | 799     |
| Roads     | 5739      | 495     | 27      | 20      | 102,165 | 834    | 1600    |
| Soils     | 2736      | 711     | 210     | 162     | 9       | 22,274 | 310     |
| Shadows   | 18,782    | 645     | 1289    | 2432    | 3122    | 30     | 83,333  |
