Unmanned Aerial Vehicles for Photogrammetry

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (3 June 2022) | Viewed by 22725

Special Issue Editors


Dr. Damian Wierzbicki
Guest Editor
Institute of Geospatial Engineering and Geodesy, Faculty of Civil Engineering and Geodesy, Military University of Technology, 00-908 Warsaw, Poland
Interests: photogrammetry; remote sensing; UAV; dense image matching; deep learning; image quality; image classification

Dr. Kamil Krasuski
Guest Editor
Institute of Navigation, Military University of Aviation, 08-521 Dęblin, Poland
Interests: GPS; GLONASS; Galileo; SBAS; GBAS; accuracy; EGNOS; aircraft position; GNSS satellite positioning; accuracy analysis; elements of exterior orientation; UAV positioning; UAV orientation; UAV navigation; flight parameters of UAV

Special Issue Information

Dear Colleagues,

This Special Issue will focus on new trends in UAV photogrammetry. Owing to the intensive development of UAV technology (fixed-wing and multi-rotor platforms) and computer vision algorithms, photogrammetry based on unmanned aerial vehicles (UAV photogrammetry) is currently a very popular technology that finds applications in many areas. Based on images obtained from a low altitude, it is possible to generate dense point clouds, very high-resolution digital terrain models, digital surface models, and true-orthophotos. In geometric terms, most UAV photogrammetry research problems are similar to those in classical photogrammetry; however, issues related to navigation, image matching, and the radiometric quality of images are new. UAV photogrammetry introduces new possibilities for mapping areas with very high resolution. New research trends can also be seen in the integration of GNSS/IMU receivers and stereo imaging for UAV hybrid navigation.

Also noteworthy are the capabilities of modern photogrammetric software, which, thanks to the intensive development of structure-from-motion algorithms, makes it possible to carry out many types of photogrammetric studies. Such technology is based not only on images obtained in the visible range but also on multispectral images. Problems related to the implementation of deep learning methods for camera calibration, image orientation, bundle adjustment, and dense point cloud classification are also interesting research issues.

We seek submissions reviewing trends in UAV photogrammetry in, but not limited to, the fields of image quality, large-area mapping, powerline inspection, positioning accuracy, and deep learning methods for image matching. Reviews of trends in the quality control of UAV photogrammetric products, the integration of on-board UAV photogrammetric sensors, and new multi-image matching methods are also welcome. In addition, we plan to include reviews of UAV photogrammetry applications in areas such as the monitoring of engineering investments, heritage and BIM based on UAV photogrammetric data, and the detection and classification of objects in images obtained from a low altitude.

Dr. Damian Wierzbicki
Dr. Kamil Krasuski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unmanned aerial vehicles (UAVs)
  • photogrammetry
  • dense image matching
  • georeferencing accuracy
  • GNSS RTK camera positioning
  • bundle block adjustment
  • image quality assessment
  • deep learning in stereo matching
  • point clouds
  • structure from motion
  • digital terrain model (DTM)
  • digital surface model (DSM)
  • true-ortho geometric accuracy assessment
  • mapping accuracy

Published Papers (8 papers)

Research

17 pages, 5130 KiB  
Article
Estimation of the Block Adjustment Error in UAV Photogrammetric Flights in Flat Areas
by Alba Nely Arévalo-Verjel, José Luis Lerma, Juan F. Prieto, Juan Pedro Carbonell-Rivera and José Fernández
Remote Sens. 2022, 14(12), 2877; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14122877 - 16 Jun 2022
Cited by 3 | Viewed by 2504
Abstract
UAV-DAP (unmanned aerial vehicle-digital aerial photogrammetry) has become one of the most widely used geomatics techniques in the last decade due to its low cost and capacity to generate high-density point clouds, thus demonstrating its great potential for delivering high-precision products with a spatial resolution of centimetres. The question is, how should it be applied to obtain the best results? This research explores different flat scenarios to analyse the accuracy of this type of survey based on photogrammetric SfM (structure from motion) technology, flight planning with ground control points (GCPs), and the combination of forward and cross strips, up to the point of processing. The RMSE (root mean square error) is analysed for each scenario to verify the quality of the results. An equation is adjusted to estimate the a priori accuracy of the photogrammetric survey with digital sensors, identifying the best option for μxyz (weight coefficients depending on the layout of both the GCP and the image network) for the four scenarios studied. The UAV flights were made in Lorca (Murcia, Spain). The study area has an extension of 80 ha, which was divided into four blocks. The GCPs and checkpoints (ChPs) were measured using dual-frequency GNSS (global navigation satellite system), with a tripod and centring system on the mark at the indicated point. The photographs were post-processed using the Agisoft Metashape Professional software (64 bits). The flights were made with two multirotor UAVs, a Phantom 3 Professional and an Inspire 2, with a Zenmuse X5S camera. We verify the influence of including additional forward and/or cross strips combined with four GCPs in the corners, plus one additional GCP in the centre, in order to obtain better photogrammetric adjustments based on the preliminary flight planning. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
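
As a quick illustration of the kind of accuracy check described in this abstract, the sketch below computes a per-axis RMSE between GNSS-surveyed and photogrammetrically adjusted check-point coordinates. It is not the authors' code; the coordinate arrays and the ~3 cm noise level are placeholders.

```python
# Minimal RMSE sketch (illustrative only, not the authors' workflow).
import numpy as np

def rmse_per_axis(measured: np.ndarray, adjusted: np.ndarray) -> np.ndarray:
    """Per-axis RMSE for X, Y, Z given (n, 3) coordinate arrays."""
    residuals = adjusted - measured
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# Hypothetical check points: GNSS-surveyed vs. block-adjusted coordinates.
gnss = np.array([[10.00, 20.00, 5.00]] * 5)
photo = gnss + np.random.normal(0, 0.03, gnss.shape)  # ~3 cm simulated error
print(rmse_per_axis(gnss, photo))  # roughly 0.03 m per axis
```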

21 pages, 3533 KiB  
Article
Application of the XBoost Regressor for an A Priori Prediction of UAV Image Quality
by Aleksandra Sekrecka
Remote Sens. 2021, 13(23), 4757; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13234757 - 24 Nov 2021
Cited by 1 | Viewed by 1651
Abstract
In general, the quality of imagery from Unmanned Aerial Vehicles (UAVs) is evaluated after the flight, and then a decision is made on the further value and use of the acquired data. In this paper, an a priori (preflight) image quality prediction methodology is proposed to estimate the expected image quality before the flight and to avoid unfavourable flights, which is extremely important from a time and cost management point of view. The XBoost Regressor model and cross-validation were used to train the model and predict image quality. The model was trained on a rich database of real-world images acquired from UAVs under conditions varying in sensor type, UAV type, exposure parameters, weather, topography, and land cover. Radiometric quality indices (SNR, Entropy, PIQE, NIQE, BRISQUE, and NRPBM) were calculated for each image to train and test the model and to assess the accuracy of image quality prediction. Different variants of preflight parameter knowledge were considered in the study. The proposed methodology offers the possibility of predicting image quality with high accuracy. The correlation coefficient between the actual and predicted image quality, depending on the number of parameters known a priori, ranged from 0.90 to 0.96. The methodology was designed for data acquired from a UAV. Similar prediction accuracy is expected for other low-altitude or close-range photogrammetric data. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
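
The modelling step summarised above (gradient-boosted regression with cross-validation) can be reproduced in spirit with standard libraries. The sketch below is only an assumed setup: the preflight feature matrix X and the quality scores y are synthetic placeholders, and XGBRegressor with cross_val_predict stands in for whatever configuration the author actually used.

```python
# Illustrative gradient-boosting sketch, not the paper's configuration.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_predict

X = np.random.rand(200, 8)      # placeholder preflight parameters (exposure, UAV/sensor type, weather, ...)
y = np.random.rand(200) * 100   # placeholder radiometric quality scores (e.g. a BRISQUE-like index)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
y_pred = cross_val_predict(model, X, y, cv=5)     # 5-fold cross-validated predictions
r = np.corrcoef(y, y_pred)[0, 1]                  # correlation between actual and predicted quality
print(f"correlation coefficient: {r:.2f}")
```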

19 pages, 29895 KiB  
Article
Comparison of Modelling Strategies to Estimate Phenotypic Values from an Unmanned Aerial Vehicle with Spectral and Temporal Vegetation Indexes
by Pengcheng Hu, Scott C. Chapman, Huidong Jin, Yan Guo and Bangyou Zheng
Remote Sens. 2021, 13(14), 2827; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142827 - 19 Jul 2021
Cited by 8 | Viewed by 2077
Abstract
Aboveground dry weight (AGDW) and leaf area index (LAI) are indicators of crop growth status and grain yield as affected by interactions of genotype, environment, and management. Unmanned aerial vehicle (UAV) based remote sensing provides cost-effective and non-destructive methods for the high-throughput phenotyping of crop traits (e.g., AGDW and LAI) through the integration of UAV-derived vegetation indexes (VIs) with statistical models. However, the effects of different modelling strategies that use different dataset compositions of explanatory variables (i.e., combinations of sources and temporal combinations of the VI datasets) on estimates of AGDW and LAI have rarely been evaluated. In this study, we evaluated the effects of three sources of VIs (visible, spectral, and combined) and three types of temporal combinations of the VI datasets (mono-, multi-, and full-temporal) on estimates of AGDW and LAI. The VIs were derived from visible (RGB) and multi-spectral imageries, which were acquired by a UAV-based platform over a wheat trial at five sampling dates before flowering. Partial least squares regression models were built with different modelling strategies to estimate AGDW and LAI at each prediction date. The results showed that models built with the three sources of mono-temporal VIs obtained similar performances for estimating AGDW (RRMSE = 11.86% to 15.80% for visible, 10.25% to 16.70% for spectral, and 10.25% to 16.70% for combined VIs) and LAI (RRMSE = 13.30% to 22.56% for visible, 12.04% to 22.85% for spectral, and 13.45% to 22.85% for combined VIs) across prediction dates. Mono-temporal models built with visible VIs outperformed the other two sources of VIs in general. Models built with mono-temporal VIs generally obtained better estimates than models with multi- and full-temporal VIs. The results suggested that the use of UAV-derived visible VIs can be an alternative to multi-spectral VIs for high-throughput and in-season estimates of AGDW and LAI. The combination of modelling strategies that used mono-temporal datasets and a self-calibration method demonstrated the potential for in-season estimates of AGDW and LAI (RRMSE normally less than 15%) in breeding or agronomy trials. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
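
For readers unfamiliar with partial least squares regression from vegetation indexes, the sketch below shows a generic PLSR fit scored with a relative RMSE (RRMSE). The VI matrix, the trait values, and the number of latent components are assumptions, not the authors' data or settings.

```python
# Generic PLSR sketch with an RRMSE score (assumed, not the authors' pipeline).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(120, 10)                                  # placeholder: 10 VIs per plot
y = 2.0 + 3.0 * X[:, 0] + np.random.normal(0, 0.2, 120)      # placeholder trait (AGDW or LAI)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rrmse = np.sqrt(np.mean((y_te - y_hat) ** 2)) / np.mean(y_te) * 100
print(f"RRMSE = {rrmse:.2f}%")
```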

25 pages, 29283 KiB  
Article
Improvement of UAV Positioning Performance Based on EGNOS+SDCM Solution
by Kamil Krasuski, Damian Wierzbicki and Mieczysław Bakuła
Remote Sens. 2021, 13(13), 2597; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132597 - 02 Jul 2021
Cited by 14 | Viewed by 2292
Abstract
The article presents the results of research on multi-SBAS (multi-satellite-based augmentation system) positioning in UAV (unmanned aerial vehicle) technology. For this purpose, a new solution was developed for combining the UAV position navigation solution from several SBAS systems. In this particular case, the presented linear combination algorithm is based on the fusion of EGNOS (European geostationary navigation overlay service) and SDCM (system of differential correction and monitoring) positioning to determine the resultant UAV coordinates. The algorithm of the mathematical model uses weights of measurements in three ways, i.e., Variant I, the reciprocal of the number of tracked satellites from a single SBAS solution; Variant II, the inverse square of mean coordinate errors from a single SBAS solution; and Variant III, the reciprocal of UAV flight speed from a single SBAS solution. The research experiment used real GNSS (global navigation satellite system) navigation data recorded by a VTOL unmanned platform. The test flight was made in April 2020 in Poland, near Warsaw. Based on the developed research results, it was found that the highest accuracy of UAV positioning was obtained when using the weighting model of Variant II. In the weight model of Variant II, the accuracy of the UAV position solution increased by 1–2% for the horizontal components and 19–22% for the vertical component h, relative to the results obtained from weighting Variants I and III. It is worth noting that the proposed research model significantly improves the results of determining the ellipsoidal height h. Compared to the arithmetic mean model, the determination of the h component in the Variant II weight model improved by about 23%. The paper also shows the advantage of EGNOS+SDCM positioning over EGNOS positioning alone in determining the accuracy of the vertical component h. The obtained research results show the significant advantages of the multi-SBAS positioning model in UAV technology. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
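
The weighted-mean idea behind the three variants can be sketched in a few lines. This is only an interpretation of the abstract, not the authors' implementation; the coordinates and mean-error values below are hypothetical.

```python
# Weighted fusion of two SBAS position solutions (sketch of the idea only).
import numpy as np

def fuse_positions(pos_a, pos_b, w_a, w_b):
    """Weighted mean of two position solutions (e.g. B, L, h)."""
    w = np.array([w_a, w_b], dtype=float)
    w /= w.sum()                                   # normalise the weights
    return w[0] * np.asarray(pos_a) + w[1] * np.asarray(pos_b)

# Variant II style weights: inverse square of the mean coordinate errors (metres).
m_egnos, m_sdcm = 1.2, 0.9                         # hypothetical values
fused = fuse_positions([52.00010, 21.00020, 143.1],   # EGNOS solution (B, L, h)
                       [52.00030, 21.00010, 143.4],   # SDCM solution (B, L, h)
                       1.0 / m_egnos**2, 1.0 / m_sdcm**2)
print(fused)
```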

20 pages, 15030 KiB  
Article
Assessing Repeatability and Reproducibility of Structure-from-Motion Photogrammetry for 3D Terrain Mapping of Riverbeds
by Jessica De Marco, Eleonora Maset, Sara Cucchiaro, Alberto Beinat and Federico Cazorzi
Remote Sens. 2021, 13(13), 2572; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132572 - 01 Jul 2021
Cited by 11 | Viewed by 2773
Abstract
Structure-from-Motion (SfM) photogrammetry is increasingly employed in geomorphological applications for change detection, but repeatability and reproducibility of this methodology are still insufficiently documented. This work aims to evaluate the influence of different survey acquisition and processing conditions, including the camera used for image collection, the number of Ground Control Points (GCPs) employed during Bundle Adjustment, GCP coordinate precision and Unmanned Aerial Vehicle flight mode. The investigation was carried out over three fluvial study areas characterized by distinct morphology, performing multiple flights consecutively and assessing possible differences among the resulting 3D models. We evaluated both residuals on check points and discrepancies between dense point clouds. Analyzing these metrics, we noticed high repeatability (Root Mean Square of signed cloud-to-cloud distances less than 2.1 cm) for surveys carried out under the same conditions. By varying the camera used, instead, contrasting results were obtained that appear to depend on the study site characteristics. In particular, lower reproducibility was highlighted for the surveys involving an area characterized by flat topography and homogeneous texturing. Moreover, this study confirms the importance of the number of GCPs entering in the processing workflow, with different impact depending on the camera used for the survey. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
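
A simplified version of the cloud-to-cloud comparison used above can be written with a k-d tree: for every point of one cloud, take the distance to its nearest neighbour in the other cloud and report the RMS. This uses unsigned nearest-neighbour distances rather than the signed distances reported in the paper, and the point clouds below are synthetic.

```python
# Simplified cloud-to-cloud RMS distance (unsigned nearest-neighbour version).
import numpy as np
from scipy.spatial import cKDTree

def c2c_rms(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    """cloud_a, cloud_b: (n, 3) arrays of XYZ points."""
    tree = cKDTree(cloud_b)
    d, _ = tree.query(cloud_a)          # nearest-neighbour distance for each point of A
    return float(np.sqrt(np.mean(d ** 2)))

a = np.random.rand(1000, 3)                          # synthetic survey 1
b = a + np.random.normal(0, 0.02, a.shape)           # synthetic survey 2, ~2 cm discrepancies
print(f"C2C RMS = {c2c_rms(a, b):.3f} m")
```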

32 pages, 11619 KiB  
Article
Analysis of the Possibilities of Using Different Resolution Digital Elevation Models in the Study of Microrelief on the Example of Terrain Passability
by Wojciech Dawid and Krzysztof Pokonieczny
Remote Sens. 2020, 12(24), 4146; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12244146 - 18 Dec 2020
Cited by 14 | Viewed by 2027
Abstract
In this article, we discuss issues concerning the development of detailed passability maps, which are used in the crisis management process and for military purposes. The paper presents the authorial methodology of the automatic generation of these maps with the use of high-resolution digital elevation models (DEMs) acquired from airborne laser scanning (light detection and ranging (LIDAR)) and photogrammetric data obtained from unmanned aerial vehicle (UAV) measurements. The aim of the article is to conduct a detailed comparison of these models in the context of their usage in passability map development. The proposed algorithm of map generation was tested comprehensively in terms of the source of the used spatial data, the resolution, and the types of vehicles moving in terrain. Tests were conducted on areas with a diversified landform, with typical forms of relief that hinder vehicle movement (bluffs and streams). Due to the huge amount of data to be processed, the comprehensive analysis of the possibilities of using DEMs in different configurations of pixel size was executed. This allowed for decreasing the resolution of the model while maintaining the appropriate accuracy properties of the resulting passability map. The obtained results showed insignificant disparities between both sources of used DEMs and demonstrated that using the model with the 2.5 m pixel size did not significantly degrade the accuracy of the passability maps, which has a huge impact on their generation time. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
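
To make the resolution trade-off concrete, the sketch below block-averages a fine DEM to a coarser pixel size and flags cells whose slope exceeds a vehicle-specific limit. It is only an illustration of the general idea; the 30° threshold, the pixel sizes, and the random DEM are assumptions, not the authors' passability methodology.

```python
# DEM resampling and a simple slope-based passability mask (illustrative only).
import numpy as np

def downsample(dem: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a DEM by an integer factor (e.g. 0.5 m -> 2.5 m with factor 5)."""
    h = (dem.shape[0] // factor) * factor
    w = (dem.shape[1] // factor) * factor
    return dem[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def slope_deg(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope in degrees from finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

dem_05m = np.random.rand(500, 500) * 3.0          # placeholder 0.5 m DEM
dem_25m = downsample(dem_05m, 5)                  # resampled to a 2.5 m pixel size
impassable = slope_deg(dem_25m, 2.5) > 30.0       # hypothetical 30° slope limit
print(f"impassable fraction: {impassable.mean():.2%}")
```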

35 pages, 8386 KiB  
Article
Methodology of Processing Single-Strip Blocks of Imagery with Reduction and Optimization Number of Ground Control Points in UAV Photogrammetry
by Marta Lalak, Damian Wierzbicki and Michał Kędzierski
Remote Sens. 2020, 12(20), 3336; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12203336 - 13 Oct 2020
Cited by 10 | Viewed by 2923
Abstract
Unmanned aerial vehicle (UAV) systems are often used to collect high-resolution imagery. Data obtained from UAVs are now widely used for both military and civilian purposes. This article discusses the issues related to the use of UAVs for the imaging of restricted areas. Two methods of developing single-strip blocks with the optimal number of ground control points are presented. The proposed methodology is based on a modified linear regression model and an empirically modified Levenberg–Marquardt–Powell algorithm. The effectiveness of the proposed methods of adjusting a single-strip block was verified based on several test sets. For method I, the root mean square error (RMSE) values for the X, Y, Z coordinates of the control points were within the range of 0.03–0.13 m/0.08–0.09 m, and for the second method, 0.03–0.04 m/0.06–0.07 m. For independent control points, the RMSE values were 0.07–0.12 m/0.06–0.07 m for the first method and 0.07–0.12 m/0.07–0.09 m for the second method. The results of the single-strip block adjustment showed that the use of the modified Levenberg–Marquardt–Powell method improved the adjustment accuracy by 13% and 16%, respectively. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
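
The Levenberg–Marquardt family of solvers referenced above is available off the shelf; the sketch below fits a hypothetical 2D similarity transform with SciPy's LM implementation purely to show the pattern. It is not the authors' modified Levenberg–Marquardt–Powell algorithm or their adjustment model.

```python
# Generic Levenberg-Marquardt least-squares fit (illustrative pattern only).
import numpy as np
from scipy.optimize import least_squares

def residuals(params, src, dst):
    """Residuals of a 2D similarity transform: scale, rotation, tx, ty."""
    s, theta, tx, ty = params
    c, si = np.cos(theta), np.sin(theta)
    x = s * (c * src[:, 0] - si * src[:, 1]) + tx
    y = s * (si * src[:, 0] + c * src[:, 1]) + ty
    return np.concatenate([x - dst[:, 0], y - dst[:, 1]])

src = np.random.rand(10, 2) * 100                  # placeholder block coordinates
dst = 1.01 * src + np.array([2.0, -3.0]) + np.random.normal(0, 0.05, src.shape)
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0, 0.0], args=(src, dst), method="lm")
print(fit.x)   # estimated scale, rotation, tx, ty
```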

22 pages, 6724 KiB  
Article
A Novel Method for the Deblurring of Photogrammetric Images Using Conditional Generative Adversarial Networks
by Pawel Burdziakowski
Remote Sens. 2020, 12(16), 2586; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12162586 - 11 Aug 2020
Cited by 16 | Viewed by 4835
Abstract
Visual data acquisition from small unmanned aerial vehicles (UAVs) may encounter situations in which blur appears on the images. Image blurring caused by camera motion during exposure significantly impacts the image interpretation quality and, consequently, the quality of photogrammetric products. On blurred images, it is difficult to visually locate ground control points, and the number of identified feature points decreases rapidly together with an increasing blur kernel. The nature of blur can be non-uniform, which makes it hard to forecast for traditional deblurring methods. Due to the above, the author of this publication concluded that the neural methods developed in recent years were able to eliminate blur on UAV images with an unpredictable or highly variable blur nature. In this research, a new, rapid method based on generative adversarial networks (GANs) was applied for deblurring. A data set for neural network training was developed based on real aerial images collected over the last few years. More than 20 full sets of photogrammetric products were developed, including point clouds, orthoimages and digital surface models. The sets were generated from both blurred and deblurred images using the presented method. The results presented in the publication show that the method for improving blurred photo quality significantly contributed to an improvement in the general quality of typical photogrammetric products. The geometric accuracy of the products generated from deblurred photos was maintained despite the rising blur kernel. The quality of textures and input photos was increased. This research proves that the developed method based on neural networks can be used for deblurring, even of highly blurred images, and that it significantly increases the final geometric quality of the photogrammetric products. In practical cases, it will be possible to implement an additional feature in photogrammetric software, which will eliminate unwanted blur and allow one to use almost all blurred images in the modelling process. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Photogrammetry)
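
The adversarial objective behind GAN-based deblurring can be summarised in a few lines of PyTorch. The toy networks, the combined adversarial/L1 loss, and the random tensors below are placeholders; this is not the author's network architecture or training schedule.

```python
# Toy conditional-GAN generator step for deblurring (placeholders throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1))                          # toy generator
D = nn.Sequential(nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))  # toy discriminator

bce = nn.BCEWithLogitsLoss()
blurred = torch.rand(4, 3, 64, 64)     # placeholder blurred patches
sharp = torch.rand(4, 3, 64, 64)       # placeholder sharp targets

restored = G(blurred)
d_fake = D(torch.cat([blurred, restored], dim=1))        # conditioned on the blurred input
g_loss = bce(d_fake, torch.ones_like(d_fake)) + F.l1_loss(restored, sharp)
print(g_loss.item())
```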
