Drone Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Biogeosciences Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 60707

Special Issue Editor

Dr. James R. Kellner
Department of Ecology and Evolutionary Biology, Institute at Brown for Environment and Society, Brown University, 85 Waterman Street, Providence, RI 02912, USA
Interests: calibration and validation; ecology; Global Ecosystem Dynamics Investigation; imaging spectroscopy; instrument characterization; LiDAR; machine learning; solar-induced fluorescence; unmanned aerial vehicles

Special Issue Information

Dear Colleagues,

Remote sensing from the air and space has greatly advanced our understanding of the physical and biological elements of the Earth system, including stocks and fluxes of carbon and water, agricultural productivity, vegetation responses to environmental change, atmospheric aerosol distributions, and weather. This Special Issue will address recent advances in drone remote sensing. Submissions that describe new insights into unresolved problems, enabled by observations at novel scales of space and time, are particularly encouraged, as are submissions that demonstrate observations in novel environments. Manuscripts using any unmanned platform, including commercial and government-funded systems, are welcome. Submissions may include, but are not limited to:

  • Short-term physiological responses of vegetation to changes in the environment (minutes to hours)
  • Targeted acquisitions that take advantage of natural experiments, such as phenology, natural disasters, or ice melting
  • Observations in challenging or dangerous environments, including volcanic plumes, eruptions, smoke, hurricanes, and long-duration flights to remote locations
  • Aerosol measurement, weather reconnaissance
  • Characterization of plant or animal distribution and habitat
  • Instrument characterization, calibration, and validation
  • Observations in support of calibration and validation activities for current and forthcoming space missions
  • Solar-induced fluorescence
  • Multitemporal remote sensing

In the manuscript cover letter, authors should specifically articulate the opportunities or understanding enabled by unmanned observations that are not achievable using traditional (piloted) high-altitude airborne or satellite platforms.

Dr. James R. Kellner
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • drone
  • unmanned aerial vehicle
  • UAV
  • UAS

Published Papers (10 papers)


Research


25 pages, 11135 KiB  
Article
A Method for Dehazing Images Obtained from Low Altitudes during High-Pressure Fronts
by Damian Wierzbicki, Michal Kedzierski and Aleksandra Sekrecka
Remote Sens. 2020, 12(1), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12010025 - 19 Dec 2019
Cited by 13 | Viewed by 3410
Abstract
Unmanned aerial vehicles (UAVs) equipped with compact digital cameras and multispectral sensors are used in remote sensing applications and environmental studies. Recently, owing to the falling cost of these systems, their increased reliability, and the possibility of image acquisition with very high spatial resolution, low-altitude imaging has come to be used in many qualitative and quantitative remote sensing analyses. There has also been enormous progress in the processing of images obtained from UAV platforms. Until now, research on UAV imaging has focused mainly on geometric and, in part, radiometric correction, while the effects of the low atmosphere and haze on images have been neglected because of the low operating altitudes of UAVs. However, the path of sunlight through the layers of the low atmosphere causes refraction, and the imaging sensor consequently registers reflectance incorrectly. Images obtained from low altitudes may also be degraded by scattering caused by fog and other weather conditions. These negative atmospheric factors reduce contrast and colour reproduction in the image, lowering its radiometric quality. This paper presents a method for dehazing images acquired from UAV platforms. As part of the research, a methodology for imagery acquisition from low altitude was introduced, and methods of atmospheric calibration based on the atmospheric scattering model were presented. Moreover, a modified dehazing model using Wiener's adaptive filter was presented. The accuracy of the proposed dehazing method was assessed using quality indices: structural similarity (SSIM), peak signal-to-noise ratio (PSNR), root mean square error (RMSE), the correlation coefficient, the Universal Image Quality Index (Q index), and entropy. The experimental results showed that the proposed method removed the negative impact of haze and improved image quality by an average of 34% (in terms of PSNR) compared with similar methods. The results show that our approach can process images to remove the negative influence of the low atmosphere, making it possible to dehaze images acquired in high humidity and radiation fog, and providing better-quality imagery for remote sensing analysis. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
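
As a concrete illustration of the evaluation step described above, the sketch below applies a Wiener adaptive filter to a single image band and scores the result with three of the quality indices the authors name (PSNR, SSIM, RMSE). It is a minimal approximation, not the paper's modified dehazing model; the kernel size and the use of SciPy's generic Wiener filter are assumptions.

```python
import numpy as np
from scipy.signal import wiener
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def dehaze_band(band, kernel=5):
    """Suppress haze-like noise in one image band scaled to [0, 1].
    `kernel` is a hypothetical window size, not a value from the paper."""
    return np.clip(wiener(band.astype(float), mysize=kernel), 0.0, 1.0)

def quality_indices(reference, restored):
    """Three of the indices used in the abstract's accuracy assessment."""
    return {
        "PSNR": peak_signal_noise_ratio(reference, restored, data_range=1.0),
        "SSIM": structural_similarity(reference, restored, data_range=1.0),
        "RMSE": float(np.sqrt(np.mean((reference - restored) ** 2))),
    }
```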

23 pages, 37971 KiB  
Article
Multiple-Object-Tracking Algorithm Based on Dense Trajectory Voting in Aerial Videos
by Tao Yang, Dongdong Li, Yi Bai, Fangbing Zhang, Sen Li, Miao Wang, Zhuoyue Zhang and Jing Li
Remote Sens. 2019, 11(19), 2278; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11192278 - 29 Sep 2019
Cited by 9 | Viewed by 4620
Abstract
In recent years, UAV technology has developed rapidly. Owing to the mobility, low cost, and variable monitoring altitude of UAVs, multiple-object detection and tracking in aerial videos has become a research hotspot in computer vision. However, camera motion, small target size, target adhesion, and unpredictable target motion still make it difficult to detect and track targets of interest in aerial videos, especially at low frame rates, where the target position changes greatly between frames. In this paper, we propose a multiple-object-tracking algorithm based on dense-trajectory voting in aerial videos. The method casts the multiple-target-tracking problem as voting by dense optical-flow trajectories for target IDs; it can be applied to aerial-surveillance scenes and is robust to low-frame-rate videos. More specifically, we first built an aerial video dataset for vehicle targets, comprising a training dataset and a diverse test dataset. Based on this, we trained a neural network model using deep learning to detect vehicles in aerial videos. Thereafter, we calculated the dense optical flow between adjacent frames and generated effective dense-optical-flow trajectories in each detection bounding box at the current time. Given the target IDs of the optical-flow trajectories, the votes cast by the trajectories in each detection bounding box are counted. Finally, similarity between detections in adjacent frames is measured from the voting results, and tracking results are obtained by data association. To evaluate performance, we conducted experiments on the self-built test datasets. Extensive experimental results showed that the proposed algorithm obtains good target-tracking results in various complex scenarios and remains robust when the video frame rate is reduced. In addition, we carried out qualitative and quantitative comparisons with three state-of-the-art tracking algorithms, which further showed that the algorithm not only obtains good tracking results in aerial videos at a normal frame rate, but also performs well under low-frame-rate conditions. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
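
The voting idea at the core of this abstract can be sketched compactly: each dense optical-flow trajectory carries a target ID, and each detection box in the current frame is assigned the ID that the most trajectories ending inside it vote for. The data structures below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def vote_ids(boxes, trajectories):
    """boxes: list of (x1, y1, x2, y2) detections in the current frame;
    trajectories: list of (track_id, x, y) trajectory endpoints."""
    votes = [Counter() for _ in boxes]
    for track_id, x, y in trajectories:
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= x <= x2 and y1 <= y <= y2:
                votes[i][track_id] += 1  # trajectory votes for this box
    # Assign each box the ID with the most trajectory votes (None if empty).
    return [v.most_common(1)[0][0] if v else None for v in votes]
```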

16 pages, 4757 KiB  
Article
Drone Based Quantification of Channel Response to an Extreme Flood for a Piedmont Stream
by George Heritage and Neil Entwistle
Remote Sens. 2019, 11(17), 2031; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11172031 - 29 Aug 2019
Cited by 7 | Viewed by 3079
Abstract
Research into the influence of extreme floods on the form and functioning of upland systems has concentrated on the erosive impact of these flows: they are seen to be highly competent, with coarse sediment transport rates limited by upstream supply and moderated by the 'blanketing' effect of an armour layer. This study investigates the effect of extreme events on an upland sediment cascade recently subjected to an extreme rainfall-induced flood. A drone-based survey generated orthophotography and a DEM surface, which was compared with historic LiDAR data. This allowed erosion and deposition to be quantified and surface micro-variation to be used to characterise stable and mobile sediment. The idealised model of sediment residence time increasing downstream is questioned by the findings of this study: relatively little coarse bedload sediment appears to have been transferred downstream; instead, the flood produced initial local channel erosion (moderated by legacy large sediment), mid-reach palaeo-channel reactivation, sub-channel infilling, and deposition of the majority of mobilised sediment across berm and bar surfaces within the active inset channel margins. Channel-margin erosion was largely limited to fine-sediment stripping, moderated by the re-exposure of post-glacial sediment. Only a weak relationship was found between local channel slope and deposition, with storage linked more to the presence of berm and bar areas within the inset active channel. Downstream fining of sediment is apparent, as is a strong contrast between coarser active sub-channels and finer bar and berm areas. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
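
The erosion/deposition quantification rests on a standard DEM-of-difference calculation; a minimal sketch follows, assuming two co-registered elevation grids (the UAV SfM surface and the historic LiDAR) with a common cell size, and a placeholder uncertainty threshold that is not taken from the paper.

```python
import numpy as np

def dod_volumes(dem_new, dem_old, cell_size_m, min_change_m=0.1):
    """Return (erosion, deposition) volumes in cubic metres.
    `min_change_m` is a hypothetical survey-uncertainty threshold."""
    dz = dem_new - dem_old                     # elevation change per cell
    dz[np.abs(dz) < min_change_m] = 0.0        # mask changes within noise
    cell_area = cell_size_m ** 2
    deposition = dz[dz > 0].sum() * cell_area  # volume of gained material
    erosion = -dz[dz < 0].sum() * cell_area    # volume of lost material
    return erosion, deposition
```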

19 pages, 3267 KiB  
Article
A Protocol for Aerial Survey in Coastal Areas Using UAS
by Michaela Doukari, Marios Batsaris, Apostolos Papakonstantinou and Konstantinos Topouzelis
Remote Sens. 2019, 11(16), 1913; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11161913 - 16 Aug 2019
Cited by 38 | Viewed by 4724
Abstract
Aerial surveys in coastal areas using Unmanned Aerial Vehicles (UAVs) present many limitations. However, the need for detailed and accurate information on the marine environment has made UAVs very popular. The aim of this paper is to present a protocol which summarizes the parameters that affect the reliability of data acquisition over the marine environment using Unmanned Aerial Systems (UAS). The proposed UAS Data Acquisition Protocol consists of three main categories: (i) morphology of the study area, (ii) environmental conditions, and (iii) flight parameters. These categories cover the parameters prevailing in the study area during a UAV mission that affect the quality of marine data. Furthermore, a UAS toolbox was developed that compares forecast weather values against predefined thresholds and calculates the optimal flight windows in a day. The toolbox was tested in two case studies with data acquisition over a coastal study area: the first UAS survey was conducted under optimal conditions, the second under non-optimal conditions. The acquired images and the produced orthophoto maps from the two surveys differ significantly in quality. Moreover, a comparison between the classified maps of the case studies showed the underestimation of some habitats in the area on the non-optimal survey day. The UAS toolbox is expected to contribute to proper flight planning in marine applications. The UAS protocol can provide valuable information for mapping, monitoring, and management of the coastal and marine environment, and can be used globally in research and a variety of marine applications. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
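
The core of the toolbox logic, comparing forecast weather values against predefined thresholds to find flyable hours, can be sketched as follows; the threshold values and field names are illustrative assumptions, not the published protocol's.

```python
# Hypothetical thresholds; the protocol defines its own per-parameter limits.
WIND_MAX_MS = 8.0      # platform wind limit
CLOUD_MAX_PCT = 20.0   # illumination/sun-glint limit
RAIN_MAX_MM = 0.0      # no precipitation tolerated

def flight_windows(hourly_forecast):
    """hourly_forecast: list of dicts with 'hour', 'wind_ms', 'cloud_pct',
    and 'rain_mm' keys; returns the hours that pass all thresholds."""
    return [
        f["hour"] for f in hourly_forecast
        if f["wind_ms"] <= WIND_MAX_MS
        and f["cloud_pct"] <= CLOUD_MAX_PCT
        and f["rain_mm"] <= RAIN_MAX_MM
    ]
```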

28 pages, 25877 KiB  
Article
An Adaptive Framework for Multi-Vehicle Ground Speed Estimation in Airborne Videos
by Jing Li, Shuo Chen, Fangbing Zhang, Erkang Li, Tao Yang and Zhaoyang Lu
Remote Sens. 2019, 11(10), 1241; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101241 - 24 May 2019
Cited by 37 | Viewed by 5935
Abstract
With the rapid development of unmanned aerial vehicles (UAVs), UAV-based intelligent airborne surveillance systems, represented by real-time ground vehicle speed estimation, have attracted wide attention from researchers. However, many challenges remain in extracting speed information from UAV videos, including the dynamic moving background, small target size, complicated environments, and diverse scenes. In this paper, we propose a novel adaptive framework for multi-vehicle ground speed estimation in airborne videos. Firstly, we build a UAV-based traffic dataset. Then, we use a deep learning detection algorithm to detect vehicles in the UAV field of view and obtain their trajectories in the image through a tracking-by-detection algorithm. Thereafter, we present a motion compensation method based on homography. This method obtains matching feature points by an optical flow method and eliminates the influence of the detected targets to accurately calculate the homography matrix and determine the real motion trajectory in the current frame. Finally, vehicle speed is estimated from the mapping relationship between pixel distance and actual distance. The method takes the actual size of the car as prior information and adaptively recovers the pixel scale by estimating the vehicle size in the image; it then calculates the vehicle speed. To evaluate the performance of the proposed system, we carried out a large number of experiments on the AirSim simulation platform as well as real UAV aerial surveillance experiments. Through quantitative and qualitative analysis of the simulated and real experiments, we verify that the proposed system can detect, track, and estimate the speed of ground vehicles simultaneously, even with a single downward-looking camera, and that it obtains effective and accurate speed estimates even in various complex scenes. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
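
The scale-recovery step lends itself to a short sketch: with the car's real-world length as prior information, the metres-per-pixel scale follows from the vehicle's length in the image, and speed from the motion-compensated frame-to-frame displacement. The function below is a simplified illustration under those assumptions, not the authors' code.

```python
import numpy as np

CAR_LENGTH_M = 4.5  # prior vehicle size assumed by the method

def ground_speed_kmh(track_px, vehicle_len_px, fps):
    """track_px: (N, 2) motion-compensated pixel positions of one vehicle;
    vehicle_len_px: the vehicle's length measured in the image."""
    m_per_px = CAR_LENGTH_M / vehicle_len_px           # adaptive pixel scale
    steps = np.diff(np.asarray(track_px, float), axis=0)
    dist_m = np.linalg.norm(steps, axis=1).sum() * m_per_px
    return dist_m / (len(track_px) - 1) * fps * 3.6    # m/frame -> km/h
```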

19 pages, 5549 KiB  
Article
Evaluation of Fire Severity Indices Based on Pre- and Post-Fire Multispectral Imagery Sensed from UAV
by Fernando Carvajal-Ramírez, José Rafael Marques da Silva, Francisco Agüera-Vega, Patricio Martínez-Carricondo, João Serrano and Francisco Jesús Moral
Remote Sens. 2019, 11(9), 993; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11090993 - 26 Apr 2019
Cited by 50 | Viewed by 6133
Abstract
Fire severity is a key factor for the management of post-fire vegetation regeneration strategies because it quantifies the impact of fire, describing the amount of damage. Several indices have been developed for estimating fire severity based on terrestrial observation by satellite imagery. To avoid the inherent limitations of this kind of data, this work employed an Unmanned Aerial Vehicle (UAV) carrying a high-resolution multispectral sensor with green, red, near-infrared, and red-edge bands. Flights were carried out before and after a controlled fire in a Mediterranean forest. The products obtained from the UAV-photogrammetric projects, based on the Structure from Motion (SfM) algorithm, were a Digital Surface Model (DSM) and multispectral images orthorectified in both periods and co-registered in the same absolute coordinate system, used to find the temporal differences (d) between pre- and post-fire values of the Excess Green Index (EGI), Normalized Difference Vegetation Index (NDVI), and Normalized Difference Red Edge (NDRE) index. The index differences (dEGI, dNDVI, and dNDRE) were reclassified into fire severity classes, which were compared with reference data identified through in situ fire damage locations and an Artificial Neural Network classification. Applying an error matrix analysis to the three index differences, the overall Kappa accuracies of the severity maps were 0.411, 0.563, and 0.211, and the Cramér's V statistics were 0.411, 0.582, and 0.269, for dEGI, dNDVI, and dNDRE, respectively. The chi-square test, used to compare the average of each severity class, determined that there were no significant differences between the three severity maps at the 95% confidence level. It was concluded that dNDVI best estimated fire severity, given the UAV flight conditions and sensor specifications. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
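
The difference-of-index step is straightforward to sketch: compute NDVI from co-registered red and near-infrared bands before and after the fire, subtract, and bin dNDVI into severity classes. The class breaks below are illustrative placeholders, not the thresholds used in the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Standard NDVI from co-registered NIR and red reflectance bands."""
    return (nir - red) / (nir + red + 1e-9)

def severity_classes(pre_nir, pre_red, post_nir, post_red):
    """Bin the pre/post NDVI difference into severity classes.
    Hypothetical breaks: 0 unburned, 1 low, 2 moderate, 3 high severity."""
    d_ndvi = ndvi(pre_nir, pre_red) - ndvi(post_nir, post_red)
    return np.digitize(d_ndvi, bins=[0.1, 0.3, 0.5])
```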

18 pages, 6011 KiB  
Article
A Novel Tilt Correction Technique for Irradiance Sensors and Spectrometers On-Board Unmanned Aerial Vehicles
by Juha Suomalainen, Teemu Hakala, Raquel Alves de Oliveira, Lauri Markelin, Niko Viljanen, Roope Näsi and Eija Honkavaara
Remote Sens. 2018, 10(12), 2068; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10122068 - 19 Dec 2018
Cited by 26 | Viewed by 5933
Abstract
In unstable atmospheric conditions, using on-board irradiance sensors is one of the only robust methods to convert unmanned aerial vehicle (UAV)-based optical remote sensing data to reflectance factors. Normally, such sensors experience significant errors due to tilting of the UAV if they are not installed on a stabilizing gimbal. Unfortunately, gimbals of sufficient accuracy are heavy and cumbersome, and cannot be installed on all UAV platforms. In this paper, we present the FGI Aerial Image Reference System (FGI AIRS), developed at the Finnish Geospatial Research Institute (FGI), and a novel method for optical and mathematical tilt correction of irradiance measurements. The FGI AIRS is a sensor unit for UAVs that provides the irradiance spectrum, Real Time Kinematic (RTK)/Post Processed Kinematic (PPK) GNSS position, and orientation for the attached cameras. The FGI AIRS processes the reference data in real time for each acquired image and can send it to an on-board or on-cloud processing unit. The novel correction method is based on three RGB photodiodes that are tilted 10° in opposite directions. These photodiodes sample the irradiance at different sensor tilts, from which the reading of a virtual horizontal irradiance sensor is calculated. The FGI AIRS was tested, and the method was shown to allow on-board measurement of irradiance with an accuracy better than ±0.8% at UAV tilts up to 10° and ±1.2% at tilts up to 15°. In addition, the ability of FGI AIRS to produce reflectance-factor-calibrated aerial images was compared against traditional methods. In the unstable weather conditions of the experiment, both the FGI AIRS and the on-ground spectrometer produced radiometrically accurate and visually pleasing orthomosaics, while the reflectance reference panels and the on-board irradiance sensor without stabilization or tilt correction both failed to do so. The authors recommend the proposed tilt correction method for all future UAV irradiance sensors that are not installed on a gimbal. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
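
The virtual-horizontal-sensor idea can be sketched under a simple direct-plus-isotropic-diffuse irradiance model: each tilted photodiode with unit normal n_i reads roughly E_dir·max(0, n_i·s) + E_diff for unit sun vector s, so three readings suffice to solve for the two unknowns and synthesise a level sensor's reading. This is a hedged approximation of the concept, not the FGI AIRS algorithm itself.

```python
import numpy as np

def virtual_horizontal(readings, normals, sun_vec):
    """readings: (3,) photodiode values; normals: (3, 3) unit normals of
    the three tilted diodes (from UAV attitude); sun_vec: (3,) unit vector
    toward the sun, in a z-up frame. Assumed model, not the AIRS method."""
    cos_i = np.clip(np.asarray(normals) @ np.asarray(sun_vec), 0.0, None)
    A = np.column_stack([cos_i, np.ones(3)])   # unknowns: E_dir, E_diff
    (e_dir, e_diff), *_ = np.linalg.lstsq(A, np.asarray(readings), rcond=None)
    cos_zenith = max(sun_vec[2], 0.0)          # horizontal sensor's incidence
    return e_dir * cos_zenith + e_diff         # virtual level reading
```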

28 pages, 9108 KiB  
Article
Freshwater Fish Habitat Complexity Mapping Using Above and Underwater Structure-From-Motion Photogrammetry
by Margaret Kalacska, Oliver Lucanus, Leandro Sousa, Thiago Vieira and Juan Pablo Arroyo-Mora
Remote Sens. 2018, 10(12), 1912; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10121912 - 29 Nov 2018
Cited by 31 | Viewed by 8703
Abstract
Substrate complexity is strongly related to biodiversity in aquatic habitats. We illustrate a novel framework, based on Structure-from-Motion (SfM) and Multi-View Stereo (MVS) photogrammetry, to quantify habitat complexity in freshwater ecosystems from Unmanned Aerial Vehicle (UAV) and underwater photography. We analysed sites in the Xingu river basin, Brazil, to reconstruct the 3D structure of the substrate and to identify and map habitat classes important for maintaining fish assemblage biodiversity. From the digital models we calculated habitat complexity metrics including rugosity, slope, and 3D fractal dimension. The UAV-based SfM-MVS products were generated at a ground sampling distance (GSD) of 1.20–2.38 cm, while the underwater photography produced a GSD of 1 mm. Our results show how these products provide spatially explicit complexity metrics that are more comprehensive than conventional arbitrary cross sections. Shallow neural network classification of SfM-MVS products of substrate exposed in the dry season resulted in high accuracies across classes. UAV and underwater SfM-MVS is robust for quantifying freshwater habitat classes and complexity and should be chosen whenever possible over conventional methods (e.g., chain-and-tape) because of the repeatability, scalability, and multi-dimensional nature of the products. The SfM-MVS products can be used to identify high-priority freshwater sectors for conservation and, in species occurrence and diversity studies, to provide a broader indication of overall fish species diversity, as well as repeatability for monitoring change over time. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
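
Of the complexity metrics listed, rugosity — the ratio of true 3D surface area to planar area — is simple to compute from a gridded SfM-MVS surface. The tangent-plane approximation below is one common formulation, not necessarily the one used in the paper.

```python
import numpy as np

def rugosity(dem, cell):
    """dem: 2D elevation grid; cell: grid spacing in the same units."""
    dzdy, dzdx = np.gradient(dem, cell)        # slopes along rows/columns
    # Each cell's tangent-plane area relative to its planar footprint.
    surface = np.sqrt(1.0 + dzdx ** 2 + dzdy ** 2).sum() * cell ** 2
    planar = dem.size * cell ** 2
    return surface / planar                    # 1.0 for a perfectly flat bed
```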

18 pages, 6249 KiB  
Article
Quantification of Extent, Density, and Status of Aquatic Reed Beds Using Point Clouds Derived from UAV–RGB Imagery
by Nicolás Corti Meneses, Florian Brunner, Simon Baier, Juergen Geist and Thomas Schneider
Remote Sens. 2018, 10(12), 1869; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10121869 - 23 Nov 2018
Cited by 17 | Viewed by 4169
Abstract
Quantification of reed coverage and vegetation status is fundamental for monitoring and developing lake conservation strategies. The applicability of Unmanned Aerial Vehicle (UAV) three-dimensional data (point clouds) for status evaluation was investigated. This study focused on mapping the extent, density, and vegetation status of aquatic reed beds. Point clouds were calculated with Structure from Motion (SfM) algorithms from aerial imagery recorded with Rotary Wing (RW) and Fixed Wing (FW) UAVs. Extent was quantified by measuring the surface between the frontline and the shoreline. Density classification was based on point geometry (height and height variance) in the point clouds. Spectral information per point was used to calculate a vegetation index, which served as an indicator of vegetation vitality. Status was derived by combining the density, vitality, and frontline shape outputs. Field observations in areas of interest (AOI) and optical imagery were used for reference and validation purposes. A root mean square error (RMSE) of 1.58 m to 3.62 m between cross sections from field measurements and the classification was achieved for the extent map. The overall accuracy (OA) for the density classification was 88.6% (Kappa = 0.8). An OA for the status classification of 83.3% (Kappa = 0.7) was reached by comparison with field measurements complemented by visual assessment of secondary Red, Green, Blue (RGB) data. The research shows that complex transitional zones (water–vegetation–land) can be assessed, supports the suitability of the applied method, and provides new strategies for monitoring aquatic reed beds using low-cost UAV imagery. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
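
The point-geometry density classification can be sketched by gridding the point cloud and thresholding the mean height and height variance in each cell; the thresholds and class labels below are illustrative placeholders rather than the study's calibrated values.

```python
import numpy as np

def density_classes(points, cell=1.0, h_split=1.5, var_split=0.05):
    """points: (N, 3) array of x, y, z coordinates; returns a dict mapping
    grid cell -> density class. Thresholds are hypothetical placeholders."""
    cells = {}
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append(z)
    out = {}
    for key, zs in cells.items():
        zs = np.asarray(zs)
        # Tall, uniform canopy (low height variance) read as dense reed.
        dense = zs.mean() > h_split and zs.var() < var_split
        out[key] = "dense" if dense else "sparse"
    return out
```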

Other


18 pages, 8097 KiB  
Technical Note
Land Cover Classification from fused DSM and UAV Images Using Convolutional Neural Networks
by Husam A. H. Al-Najjar, Bahareh Kalantar, Biswajeet Pradhan, Vahideh Saeidi, Alfian Abdul Halin, Naonori Ueda and Shattri Mansor
Remote Sens. 2019, 11(12), 1461; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11121461 - 20 Jun 2019
Cited by 126 | Viewed by 9557
Abstract
In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. Currently, few (if any) studies have attempted to increase land cover classification accuracy via unmanned aerial vehicle (UAV)–digital surface model (DSM) fused datasets. Therefore, this study looks at improving the accuracy of these datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of two datasets, with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) orthomosaic image data only (Red, Green and Blue channels), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. CNNs are promising classifiers owing to their hierarchical learning structure, regularization and weight sharing with respect to training data, generalization, optimization and parameter reduction, automatic feature extraction, and robust discrimination ability with high performance. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97, and a final overall accuracy of 0.98. Comparing the CNN results with and without the DSM revealed improvements of 1.2%, 1.8%, and 1.5% in overall accuracy, average accuracy, and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation classes, specifically where plants were dense. Full article
(This article belongs to the Special Issue Drone Remote Sensing)
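
The fusion approach amounts to stacking the DSM as a fourth channel on the RGB orthomosaic and training a patch-based CNN over the seven classes. The small Keras model below is an illustrative sketch; the architecture, patch size, and training configuration are assumptions, not the network described in the paper.

```python
import tensorflow as tf

NUM_CLASSES = 7   # bare land, buildings, trees, grass, roads, shadow, water
PATCH = 32        # hypothetical patch size in pixels

# Minimal patch classifier over fused inputs: R, G, B + DSM height channel.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(PATCH, PATCH, 4)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```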
