UAV Photogrammetry for 3D Modeling

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 36166

Special Issue Editors


Prof. Dr. Efstratios Stylianidis
Guest Editor
School of Spatial Planning and Development, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
Interests: photogrammetry; geomatics; cultural heritage; documentation

Prof. Dr. Luis Javier Sánchez-Aparicio
Guest Editor
Department of Construction and Technology in Architecture (DCTA), Escuela Técnica Superior de Arquitectura de Madrid (ETSAM), Universidad Politécnica de Madrid, Madrid, Spain
Interests: cultural heritage; geomatics; laser scanning; photogrammetry; diagnosis

Special Issue Information

Dear Colleagues,

Unmanned aerial vehicles (UAVs) are used extensively in many application areas. They can carry a variety of sensors and, depending on the application and the sensor mounted on the platform, capture aerial imagery or even acquire point clouds directly.

This Special Issue focuses on the latest advances in 3D modeling of both natural and built environments. It seeks high-quality papers that explore the full potential of these platforms and the latest developments in data acquisition, processing, and 3D modeling across a wide spectrum of applications.

We would like to invite you to contribute to this Special Issue, “UAV Photogrammetry for 3D Modeling”, by submitting articles on your recent research, experimental work, reviews, and/or case studies related to the topic. Contributions may address UAV image processing methods for photogrammetric applications, mapping and 3D modeling issues, and any other aspects related to the Special Issue theme.

This Special Issue aims to provide a scientific basis for researchers and professionals in the UAV ecosystem to make the best possible use of technological developments, in both hardware and software, across the entire 3D modeling workflow.

You may choose our Joint Special Issue in Remote Sensing.

Prof. Dr. Efstratios Stylianidis
Prof. Dr. Luis Javier Sánchez-Aparicio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Drones is an international, peer-reviewed, open access, monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Unmanned aerial vehicles, drones
  • 3D modeling
  • Photogrammetry, computer vision, and remote sensing
  • Mapping, surveying
  • Sensors
  • Natural environment, built environment

Published Papers (8 papers)

Research

14 pages, 6845 KiB  
Article
3D AQI Mapping Data Assessment of Low-Altitude Drone Real-Time Air Pollution Monitoring
by Sarun Duangsuwan, Phoowadon Prapruetdee, Mallika Subongkod and Katanyoo Klubsuwan
Drones 2022, 6(8), 191; https://doi.org/10.3390/drones6080191 - 29 Jul 2022
Cited by 9 | Viewed by 3186
Abstract
Air pollution primarily originates from substances emitted directly by natural or anthropogenic processes, such as carbon monoxide (CO) in vehicle exhaust or sulfur dioxide (SO2) released from factories. A major air pollution problem, however, is particulate matter (PM), an adverse effect of wildfires and open burning. Tools for real-time air pollution monitoring in risk areas using drones have emerged, and a new air quality index (AQI) for monitoring and display, such as three-dimensional (3D) mapping based on data assessment, is essential for timely environmental surveying. The objective of this paper is to present a 3D AQI mapping data assessment using a hybrid model based on a machine-learning method for drone real-time air pollution monitoring (Dr-TAPM). Dr-TAPM was designed by equipping a drone with multi-environmental sensors for carbon monoxide (CO), ozone (O3), nitrogen dioxide (NO2), particulate matter (PM2.5,10), and sulfur dioxide (SO2), with data pre- and post-processing performed by a hybrid model combining backpropagation neural network (BPNN) and convolutional neural network (CNN) algorithms. Experimentally, we considered a case study detecting smoke emissions from an open burning scenario. PM2.5,10 and CO were detected as air pollutants from the open burning, the 3D AQI maps showed pollutant locations, and the predicted AQI data assessment reached an accuracy of 98%.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
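
As an illustration of the hybrid data-assessment idea described above (a backpropagation network fused with a CNN), here is a minimal sketch in PyTorch. The network sizes, the five sensor channels, the 8x8 sensor grid, and the scalar AQI output are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a hybrid BPNN + CNN regressor for AQI prediction.
# Assumptions (not from the paper): 5 sensor channels (CO, O3, NO2, PM, SO2),
# an 8x8 spatial grid of readings for the CNN branch, and one scalar AQI output.
import torch
import torch.nn as nn

class HybridAQINet(nn.Module):
    def __init__(self, n_sensors: int = 5, grid: int = 8):
        super().__init__()
        # CNN branch: learns spatial patterns from a grid of sensor readings.
        self.cnn = nn.Sequential(
            nn.Conv2d(n_sensors, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B, 32, 1, 1)
            nn.Flatten(),              # -> (B, 32)
        )
        # BPNN branch: a plain feed-forward network on the latest raw readings.
        self.bpnn = nn.Sequential(
            nn.Linear(n_sensors, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both branches and regress AQI.
        self.head = nn.Linear(32 + 32, 1)

    def forward(self, grid_readings, latest_readings):
        z = torch.cat([self.cnn(grid_readings), self.bpnn(latest_readings)], dim=1)
        return self.head(z)

# Toy usage: a batch of 4 samples.
model = HybridAQINet()
grid = torch.randn(4, 5, 8, 8)      # spatial grid of sensor readings
latest = torch.randn(4, 5)          # latest raw sensor vector
print(model(grid, latest).shape)    # torch.Size([4, 1])
```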

24 pages, 82754 KiB  
Article
Oblique View Selection for Efficient and Accurate Building Reconstruction in Rural Areas Using Large-Scale UAV Images
by Yubin Liang, Xiaochang Fan, Yang Yang, Deqian Li and Tiejun Cui
Drones 2022, 6(7), 175; https://doi.org/10.3390/drones6070175 - 16 Jul 2022
Cited by 6 | Viewed by 2248
Abstract
3D building models are widely used in many applications. The traditional image-based 3D reconstruction pipeline, which uses no semantic information, is inefficient for building reconstruction in rural areas. An oblique view selection methodology for efficient and accurate building reconstruction in rural areas is proposed in this paper. A Mask R-CNN model is trained on satellite datasets and used to detect building instances in nadir UAV images. The detected building instances and UAV images are then directly georeferenced. The georeferenced building instances are used to select oblique images that cover buildings via nearest-neighbour search. Finally, precise match pairs are generated from the selected oblique images and nadir images using their georeferenced principal points. The proposed methodology is tested on a dataset containing 9775 UAV images. A total of 4441 oblique images covering 99.4% of all the buildings in the survey area are automatically selected. Experimental results show that the average precision and recall of the oblique view selection are 0.90 and 0.88, respectively. The percentages of robustly matched oblique-oblique and oblique-nadir image pairs are above 94% and 84%, respectively. The proposed methodology is evaluated for sparse and dense reconstruction. Experimental results show that sparse reconstruction based on the proposed methodology reduces data processing time by 68.9% while remaining comparably accurate and complete. The results also show high consistency between the dense point clouds of buildings reconstructed by the traditional pipeline and by the pipeline based on the proposed methodology.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
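
The nearest-neighbour selection step described above can be sketched as follows; the search radius, coordinate layout, and function names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of oblique view selection: keep only oblique images whose
# georeferenced principal points lie within a search radius of a detected
# building centroid. The radius and coordinate frame are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def select_oblique_images(principal_points_xy, building_centroids_xy, radius_m=60.0):
    """Return indices of oblique images covering at least one building.

    principal_points_xy   : (N, 2) georeferenced principal points of oblique images
    building_centroids_xy : (M, 2) georeferenced building instance centroids
    radius_m              : search radius in metres (a tunable assumption)
    """
    tree = cKDTree(principal_points_xy)
    selected = set()
    for hits in tree.query_ball_point(building_centroids_xy, r=radius_m):
        selected.update(hits)
    return sorted(selected)

# Toy usage with synthetic coordinates (metres in a local frame).
rng = np.random.default_rng(0)
pp = rng.uniform(0, 1000, size=(500, 2))     # 500 oblique principal points
bld = rng.uniform(0, 1000, size=(40, 2))     # 40 building centroids
idx = select_oblique_images(pp, bld)
print(f"{len(idx)} of {len(pp)} oblique images selected")
```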

17 pages, 6985 KiB  
Article
Super-Resolution Images Methodology Applied to UAV Datasets to Road Pavement Monitoring
by Laura Inzerillo, Francesco Acuto, Gaetano Di Mino and Mohammed Zeeshan Uddin
Drones 2022, 6(7), 171; https://doi.org/10.3390/drones6070171 - 12 Jul 2022
Cited by 14 | Viewed by 2890
Abstract
The increasingly widespread use of smartphone-grade cameras on drones has driven the development of several algorithms for image refinement. Although the latest generations of drone cameras achieve high-resolution images, the large number of pixels to be processed and acquisitions from multiple distances for stereo views often fail to guarantee satisfactory results. In particular, high flight altitudes strongly impact accuracy and result in undefined or blurry images. This is not acceptable in the field of road pavement monitoring. In that case, conventional algorithms for image resolution conversion, such as bilinear interpolation, cannot retrieve high-frequency information from an undefined capture. This limitation is felt more strongly when using the recorded images to build a 3D scenario, since geometric accuracy increases with photo resolution. Super-resolution algorithms (SRa) register multiple low-resolution images to interpolate sub-pixel information. The aim of this work is to assess, at high flight altitudes, the geometric precision of a 3D model built using the Morpho Super-Resolution™ algorithm for a road pavement distress monitoring case study.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
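
The Morpho Super-Resolution™ algorithm is proprietary, so the sketch below only illustrates the general multi-frame principle the abstract contrasts with bilinear interpolation: register low-resolution frames at sub-pixel precision, then fuse them on an upsampled grid. The phase-correlation registration and shift-and-add fusion are assumptions for illustration, not the algorithm used in the paper.

```python
# Minimal shift-and-add sketch of multi-frame super-resolution, shown only to
# illustrate the general principle; the Morpho Super-Resolution(TM) algorithm
# itself is proprietary and not reproduced here.
import cv2
import numpy as np

def shift_and_add_sr(frames, scale=2):
    """Fuse registered low-resolution frames on an upsampled grid.

    frames : list of float32 grayscale images of identical shape
    scale  : integer upsampling factor
    """
    ref = frames[0]
    h, w = ref.shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float32)
    for f in frames:
        # Sub-pixel registration against the reference via phase correlation.
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)
        # Upsample, then shift back onto the common grid.
        up = cv2.resize(f, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
        M = np.float32([[1, 0, -dx * scale], [0, 1, -dy * scale]])
        acc += cv2.warpAffine(up, M, (w * scale, h * scale))
    return acc / len(frames)

# Toy usage: synthetic shifted copies of a random texture.
rng = np.random.default_rng(0)
base = rng.random((128, 128)).astype(np.float32)
frames = [base, np.roll(base, 1, axis=1), np.roll(base, 1, axis=0)]
print(shift_and_add_sr(frames).shape)  # (256, 256)

# Baseline for comparison: single-frame bilinear upscaling, which cannot
# recover high-frequency detail (the limitation noted in the abstract).
bilinear = cv2.resize(base, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)
```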

11 pages, 5652 KiB  
Article
UAV Mapping and 3D Modeling as a Tool for Promotion and Management of the Urban Space
by Alexandros Skondras, Eleni Karachaliou, Ioannis Tavantzis, Nikolaos Tokas, Elena Valari, Ifigeneia Skalidi, Giovanni Augusto Bouvet and Efstratios Stylianidis
Drones 2022, 6(5), 115; https://doi.org/10.3390/drones6050115 - 03 May 2022
Cited by 15 | Viewed by 5031
Abstract
In the past few decades, the management of urban spaces with appropriate tools has been in constant discussion due to the plethora of new technologies that have emerged for participatory planning, drone mapping, photogrammetry, and 3D modeling. In many situations, considerable progress has been made regarding the strategic impact of the successful use of technology for the development of urban spaces. The current era provides important digital tools and the opportunity to test new perspectives in the sustainable development of cities. This paper explores the contribution of UAVs to the spatial mapping of urban space, with the goal of collecting quantitative and qualitative information for 3D modeling that enables a more comprehensive understanding of the urban environment and thus facilitates urban regeneration processes. Three-dimensional models of high accuracy are not mandatory for this research. The selected research area is particularly interesting due to its boundaries, urban voids, and public space that can evolve through public participation. The results can be used for crowdsourcing in participatory decision-making processes, for exploring the consequences these processes have on the built environment, and as a new means of involving citizens in local decision-making.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)

22 pages, 11826 KiB  
Article
New Supplementary Photography Methods after the Anomalous of Ground Control Points in UAV Structure-from-Motion Photogrammetry
by Jia Yang, Xiaopeng Li, Lei Luo, Lewen Zhao, Juan Wei and Teng Ma
Drones 2022, 6(5), 105; https://doi.org/10.3390/drones6050105 - 24 Apr 2022
Cited by 7 | Viewed by 3315
Abstract
Recently, multirotor UAVs have been widely used in high-precision terrain mapping, cadastral surveys, and other fields due to their low cost, flexibility, and high efficiency. Indirect georeferencing with ground control points (GCPs) is often required to obtain highly accurate topographic products such as orthoimages and digital surface models. However, in practical projects, GCPs are susceptible to anomalies caused by external factors (GCPs covered by foreign objects such as crops and cars, vandalism, etc.), reducing the usability of UAV images. The errors associated with the loss of GCPs are apparent. The widely used workaround of taking natural feature points as ground control points often fails to meet high accuracy requirements. For the problem of control point anomalies, this paper presents two new methods that complete data fusion by flying supplementary UAV photography at a later stage. In this study, 72 sets of experiments were set up, including three control experiments for analysis. Two parameters were used for accuracy assessment: root mean square error (RMSE) and multiscale model-to-model cloud comparison (M3C2). The study shows that the two new methods meet the reference accuracy requirements in both the horizontal and vertical directions (RMSEX = 70.40 mm, RMSEY = 53.90 mm, RMSEZ = 87.70 mm). In contrast, using natural feature points as ground control points showed poor accuracy, with RMSEX = 94.80 mm, RMSEY = 68.80 mm, and RMSEZ = 104.40 mm at the checkpoints. This research addresses anomalous GCPs in photogrammetry projects from the perspective of supplementary photography, and the two proposed methods greatly expand the means of solving the problem. In high-precision UAV projects, they can serve as an effective way to preserve accuracy when GCPs are anomalous, with significant potential for wider application. Compared with previous methods, they apply to more scenarios and offer higher compatibility and operability. The two methods can be widely applied in cadastral surveys, geomorphological surveys, heritage conservation, and other fields.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
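
A minimal sketch of the two accuracy metrics named above follows. Note that real M3C2 projects point differences along local surface normals within a search cylinder; the plain nearest-neighbour distance below is only a simplified stand-in.

```python
# Minimal sketch of per-axis RMSE at checkpoints plus a *simplified*
# cloud-to-cloud distance. Real M3C2 uses local normals and projection
# cylinders; nearest-neighbour distance here is an illustrative stand-in.
import numpy as np
from scipy.spatial import cKDTree

def rmse_per_axis(measured_xyz, reference_xyz):
    """RMSE_X, RMSE_Y, RMSE_Z over checkpoint residuals of shape (N, 3)."""
    d = np.asarray(measured_xyz) - np.asarray(reference_xyz)
    return np.sqrt(np.mean(d ** 2, axis=0))

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """Nearest-neighbour distance from each point of cloud A to cloud B."""
    return cKDTree(cloud_b).query(cloud_a)[0]

# Toy usage with synthetic checkpoints (coordinates in metres).
rng = np.random.default_rng(1)
ref = rng.uniform(0, 100, size=(200, 3))
meas = ref + rng.normal(0, [0.07, 0.05, 0.09], size=ref.shape)
print("RMSE (X, Y, Z) [m]:", rmse_per_axis(meas, ref))
```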

15 pages, 20945 KiB  
Communication
CNN-Based Dense Monocular Visual SLAM for Real-Time UAV Exploration in Emergency Conditions
by Anne Steenbeek and Francesco Nex
Drones 2022, 6(3), 79; https://doi.org/10.3390/drones6030079 - 18 Mar 2022
Cited by 28 | Viewed by 7683
Abstract
Unmanned Aerial Vehicles (UAVs) for 3D indoor mapping applications are often equipped with bulky and expensive sensors, such as LiDAR (Light Detection and Ranging) or depth cameras. The same task could also be performed by inexpensive RGB cameras installed on light, small platforms that are more agile in confined spaces, such as during emergencies. However, this task remains challenging because the absence of a GNSS (Global Navigation Satellite System) signal limits the localization (and scaling) of the UAV, and the reduced density of points in feature-based monocular SLAM (Simultaneous Localization and Mapping) limits the completeness of the delivered maps. In this paper, the real-time capabilities of a commercial, inexpensive UAV (DJI Tello) for indoor mapping are investigated. The work aims to assess its suitability for quick mapping in emergency conditions to support First Responders (FRs) during rescue operations in collapsed buildings. The proposed solution uses only images as input and integrates SLAM and CNN-based (Convolutional Neural Network) Single Image Depth Estimation (SIDE) algorithms to densify and scale the data and to deliver a map of the environment suitable for real-time exploration. The implemented algorithms, the training strategy of the network, and the first tests on the main elements of the proposed methodology are reported in detail. The results achieved in real indoor environments are also presented, demonstrating performance compatible with FRs' requirements for exploring indoor volumes before entering a building.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
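
One key step in such a pipeline is recovering the metric scale of the scale-ambiguous monocular SLAM map from the CNN depth predictions. The median-ratio estimator below is a common, robust choice, assumed here for illustration rather than taken from the paper.

```python
# Minimal sketch of metric scale recovery for a monocular SLAM map using
# CNN-predicted (metric) depths. The median-of-ratios estimator is an
# assumption for illustration, not the paper's exact method.
import numpy as np

def estimate_scale(slam_depths, cnn_depths):
    """Scale factor mapping SLAM depths to metric CNN depths.

    slam_depths : depths of tracked SLAM features in one keyframe (arbitrary units)
    cnn_depths  : CNN single-image depth predictions at the same pixels (metres)
    """
    slam_depths = np.asarray(slam_depths, dtype=float)
    cnn_depths = np.asarray(cnn_depths, dtype=float)
    valid = (slam_depths > 0) & (cnn_depths > 0)
    # Median of ratios is robust to outliers from bad matches or depth errors.
    return float(np.median(cnn_depths[valid] / slam_depths[valid]))

# Toy usage: true scale 2.5 with 5% noise and a few gross outliers.
rng = np.random.default_rng(2)
slam = rng.uniform(0.5, 5.0, 100)
cnn = 2.5 * slam * rng.normal(1.0, 0.05, 100)
cnn[:5] = rng.uniform(0.1, 20, 5)  # simulated outliers
print("estimated scale:", estimate_scale(slam, cnn))
```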

16 pages, 3009 KiB  
Article
Classification of Photogrammetric and Airborne LiDAR Point Clouds Using Machine Learning Algorithms
by Zaide Duran, Kubra Ozcan and Muhammed Enes Atik
Drones 2021, 5(4), 104; https://doi.org/10.3390/drones5040104 - 24 Sep 2021
Cited by 13 | Viewed by 3506
Abstract
With the development of photogrammetry technologies, point clouds have found a wide range of use in academic and commercial areas, making it essential to extract information from them. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures, and point cloud classification is one of the leading areas where these applications are used. In this study, point clouds obtained by aerial photogrammetry and by Light Detection and Ranging (LiDAR) over the same region are classified using machine learning. For this purpose, nine popular machine learning methods were used. Geometric features derived from the point clouds formed the feature spaces used for classification; for the photogrammetric point cloud, color information was added as well. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the Gaussian Naive Bayes (GNB) method (0.25).
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
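
A minimal sketch of this kind of workflow follows: eigenvalue-based geometric features (linearity, planarity, sphericity) computed over local neighbourhoods, then fed to an MLP classifier. The feature set, neighbourhood size, and classifier settings are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of point cloud classification from geometric features.
# Assumptions: k=20 neighbourhoods, three eigenvalue features, small MLP.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def geometric_features(points, k=20):
    """Per-point linearity, planarity, sphericity from local covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        ev = np.linalg.eigvalsh(np.cov(points[nbrs].T))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

# Toy usage: two synthetic classes (a planar patch and a volumetric blob).
rng = np.random.default_rng(3)
plane = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.02, 500)]
blob = rng.normal(5, 1.0, (500, 3))
X = geometric_features(np.vstack([plane, blob]))
y = np.r_[np.zeros(500), np.ones(500)]
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
print("training accuracy:", clf.score(X, y))
```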

18 pages, 34779 KiB  
Article
Evaluating the Performance of sUAS Photogrammetry with PPK Positioning for Infrastructure Mapping
by Conor McMahon, Omar E. Mora and Michael J. Starek
Drones 2021, 5(2), 50; https://doi.org/10.3390/drones5020050 - 01 Jun 2021
Cited by 7 | Viewed by 5199
Abstract
Traditional acquisition methods for generating digital surface models (DSMs) of infrastructure are either low-resolution and slow (total station-based methods) or expensive (LiDAR). By contrast, photogrammetric methods have recently received attention due to their ability to generate dense 3D models quickly and at low cost. However, existing frameworks often utilize many manually measured control points, require a permanent RTK/PPK reference station, or yield a reconstruction accuracy too poor to be useful in many applications. In addition, the causes of inaccuracy in photogrammetric imagery are complex and sometimes not well understood. In this study, a small unmanned aerial system (sUAS) was used to rapidly image a relatively even, 1 ha ground surface. Model accuracy was investigated to determine the importance of ground control point (GCP) count and differential GNSS base station type. Results generally showed the best performance for tests using five or more GCPs or a Continuously Operating Reference Station (CORS), with vertical root mean square errors of 0.026 and 0.027 m in these cases. However, accuracies were generally comparable to published results in the literature, demonstrating the viability of analyses relying solely on a temporary local base with a one-hour dwell time and no GCPs.
(This article belongs to the Special Issue UAV Photogrammetry for 3D Modeling)
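
A minimal sketch of the accuracy assessment behind results like these: vertical RMSE of the photogrammetric surface at independent checkpoints, compared across processing configurations. The configuration names and noise levels below are illustrative assumptions, not the paper's data.

```python
# Minimal sketch of vertical RMSE assessment at surveyed checkpoints,
# compared across hypothetical processing configurations (GCP count,
# base station type). All numbers below are illustrative.
import numpy as np

def vertical_rmse(dsm_z_at_checkpoints, surveyed_z):
    """Vertical RMSE between modeled and surveyed checkpoint elevations."""
    dz = np.asarray(dsm_z_at_checkpoints) - np.asarray(surveyed_z)
    return float(np.sqrt(np.mean(dz ** 2)))

# Toy usage: simulate checkpoint elevations and per-configuration noise.
rng = np.random.default_rng(4)
surveyed = rng.uniform(10, 12, 50)  # checkpoint elevations (m)
configs = {"0 GCPs": 0.08, "5 GCPs": 0.027, "CORS base": 0.026}  # assumed sigma (m)
for name, sigma in configs.items():
    modeled = surveyed + rng.normal(0, sigma, surveyed.shape)
    print(f"{name}: vertical RMSE = {vertical_rmse(modeled, surveyed):.3f} m")
```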
