Unmanned Aerial Systems and Remote Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 18551

Special Issue Editors


Guest Editor
Department of Graphic and Geomatic Engineering, University of Córdoba, 14071 Córdoba, Spain
Interests: UAVs; remote sensing

Special Issue Information

Dear Colleagues,

Unmanned aerial system (UAS) applications have become a rapidly expanding area of remote sensing over the last decade. Advances in unmanned aerial vehicle (UAV) platforms, sensors, and image processing have led to the increasing use of this technology in the remote sensing community.

This Special Issue focuses on unmanned aerial systems and remote sensing. Its scope includes descriptions of processing algorithms and methodologies, as well as the interpretation of spatio-temporal data for agricultural, forestry, geological, ecological, environmental, and mapping applications in general, using data from sensors on board UAVs.

Authors are invited to contribute to this Special Issue of Sensors by submitting an original manuscript.

Contributions may focus on, but are not limited to, the following:

  • UAS sensor design
  • Processing algorithms applied to UAS-based imagery datasets
  • Radiometric and spectral calibration of UAS-based sensors
  • UAS-based RGB, multispectral, hyperspectral, and thermal imaging
  • UAS-based LiDAR
  • UAS-based monitoring
  • Artificial intelligence strategies: classification and object detection
  • Decision support systems (artificial intelligence, machine learning, deep learning)
  • UAS sensor applications: precision agriculture, forestry, spatial ecology, pest detection, civil engineering, natural disasters, emergencies, fire prevention, land use, mapping, and pollution monitoring, among others

Dr. José Emilio Meroño de Larriva
Prof. Dr. Francisco Javier Mesas Carrascosa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

14 pages, 4585 KiB  
Article
A Lightweight Remote Sensing Payload for Wildfire Detection and Fire Radiative Power Measurements
by Troy D. Thornberry, Ru-Shan Gao, Steven J. Ciciora, Laurel A. Watts, Richard J. McLaughlin, Angelina Leonardi, Karen H. Rosenlof, Brian M. Argrow, Jack S. Elston, Maciej Stachura, Joshua Fromm, W. Alan Brewer, Paul Schroeder and Michael Zucker
Sensors 2023, 23(7), 3514; https://0-doi-org.brum.beds.ac.uk/10.3390/s23073514 - 27 Mar 2023
Viewed by 2069
Abstract
Small uncrewed aerial systems (sUASs) have the potential to serve as ideal platforms for high spatial and temporal resolution wildfire measurements to complement aircraft and satellite observations, but they typically have very limited payload capacity. Recognizing the need for improved data from the wildfire management and smoke forecasting communities and the potential advantages of sUAS platforms, the Nighttime Fire Observations eXperiment (NightFOX) project was funded by the US National Oceanic and Atmospheric Administration (NOAA) to develop a suite of miniaturized, relatively low-cost scientific instruments for wildfire-related measurements that would satisfy the size, weight, and power constraints of a sUAS payload. Here we report on a remote sensing system developed under the NightFOX project that consists of three optical instruments with five individual sensors for wildfire mapping and fire radiative power measurement, plus a GPS-aided inertial navigation system module for aircraft position and attitude determination. The first instrument consists of two scanning telescopes with infrared (IR) channels using narrow wavelength bands near 1.6 and 4 µm to make fire radiative power measurements over a blackbody equivalent temperature range of 320–1500 °C. The second instrument is a broadband shortwave (0.95–1.7 µm) IR imager for high spatial resolution fire mapping. Both instruments are custom built. The third instrument is a commercial off-the-shelf visible/thermal IR dual camera. The entire system weighs about 1500 g and consumes approximately 15 W of power. The system has been successfully operated for fire observations on a Black Swift Technologies S2 small fixed-wing UAS during flights over a prescribed grassland burn in Colorado and onboard a NOAA Twin Otter crewed aircraft over several western US wildfires during the 2019 Fire Influence on Regional to Global Environments and Air Quality (FIREX-AQ) field mission.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
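
The choice of narrow bands near 1.6 and 4 µm follows directly from Planck's law: radiance at short IR wavelengths rises extremely steeply with temperature, so hot fire pixels stand out against a cooler background. As a rough, self-contained illustration (not code from the paper), the following Python sketch evaluates blackbody spectral radiance at the two NightFOX bands across the instrument's stated 320–1500 °C range:

    import math

    H = 6.62607015e-34   # Planck constant (J s)
    C = 2.99792458e8     # speed of light (m/s)
    K = 1.380649e-23     # Boltzmann constant (J/K)

    def planck_radiance(wavelength_m, temp_k):
        # Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1
        a = 2.0 * H * C ** 2 / wavelength_m ** 5
        b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0
        return a / b

    for band_um in (1.6, 4.0):
        wl = band_um * 1e-6
        cold = planck_radiance(wl, 320.0 + 273.15)    # 320 degC, range minimum
        hot = planck_radiance(wl, 1500.0 + 273.15)    # 1500 degC, range maximum
        print(f"{band_um} um band: {hot / cold:,.0f}x radiance contrast")

Running this shows a radiance contrast of roughly four orders of magnitude at 1.6 µm but well under a hundred at 4 µm, which helps explain why combining the two channels constrains the retrieval across such a wide temperature range.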

25 pages, 17556 KiB  
Article
Accuracy Analysis of a New Data Processing Method for Landslide Monitoring Based on Unmanned Aerial System Photogrammetry
by Ivan Jakopec, Ante Marendić and Igor Grgac
Sensors 2023, 23(6), 3097; https://0-doi-org.brum.beds.ac.uk/10.3390/s23063097 - 14 Mar 2023
Cited by 1 | Viewed by 1284
Abstract
One of the most commonly used surveying techniques for landslide monitoring is a photogrammetric survey using an Unmanned Aerial System (UAS), where landslide displacements can be determined by comparing dense point clouds, digital terrain models, and digital orthomosaic maps resulting from different measurement epochs. A new data processing method for calculating landslide displacements based on UAS photogrammetric survey data is presented in this paper; its main advantage is that it does not require the production of the above-mentioned products, enabling faster and simpler displacement determination. The proposed method is based on matching features between the images from two different UAS photogrammetric surveys and calculating the displacements based only on the comparison of two reconstructed sparse point clouds. The accuracy of the method was analyzed on a test field with simulated displacements and on an active landslide in Croatia. Moreover, the results were compared with those obtained with a commonly used method based on comparing manually tracked features on orthomosaics from different epochs. Analysis of the test field results shows that the presented method can determine displacements with centimeter-level accuracy under ideal conditions, even at a flight height of 120 m, and with sub-decimeter-level accuracy on the Kostanjek landslide.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
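
The core of the proposed pipeline, matching features between images from two survey epochs, can be prototyped with standard computer vision tools. The following minimal Python/OpenCV sketch is our illustration rather than the authors' implementation, and the file names are placeholders; it matches ORB features across epochs and reports per-match image-space offsets, which in the paper's method would instead feed two sparse reconstructions:

    import cv2

    # Placeholder file names: one nadir image from each survey epoch.
    img_a = cv2.imread("epoch_1.jpg", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("epoch_2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe features in both epochs.
    orb = cv2.ORB_create(nfeatures=5000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Cross-checked brute-force matching on binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # Image-space offsets of the strongest matches; in the paper's pipeline
    # such correspondences feed two sparse point clouds instead.
    for m in matches[:10]:
        xa, ya = kp_a[m.queryIdx].pt
        xb, yb = kp_b[m.trainIdx].pt
        print(f"dx = {xb - xa:+.1f} px, dy = {yb - ya:+.1f} px")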

26 pages, 9712 KiB  
Article
Countering a Drone in a 3D Space: Analyzing Deep Reinforcement Learning Methods
by Ender Çetin, Cristina Barrado and Enric Pastor
Sensors 2022, 22(22), 8863; https://0-doi-org.brum.beds.ac.uk/10.3390/s22228863 - 16 Nov 2022
Cited by 4 | Viewed by 2341
Abstract
Unmanned aerial vehicles (UAVs), also known as drones, have been used for a variety of purposes, and the commercial drone market is expected to grow to remarkable levels in the near future. However, some drone users can mistakenly or intentionally fly into flight paths at major airports, fly too close to commercial aircraft, or invade people's privacy. In order to prevent these unwanted events, counter-drone technology is needed to eliminate threats from drones so that they can be integrated into the skies safely. Various counter-drone methods are available in the industry; however, a counter-drone system supported by an artificial intelligence (AI) method can be a more efficient way to fight against drones than human intervention. In this paper, a deep reinforcement learning (DRL) method is proposed to counter a drone in a 3D space by using another drone. It has already been shown that deep reinforcement learning is an effective way to counter a drone in a 2D space; however, countering a drone in a 3D space with another drone is a very challenging task, considering the time required to train while avoiding obstacles at the same time. A Deep Q-Network (DQN) algorithm with a dueling network architecture and prioritized experience replay is presented to catch another drone in an environment provided by the AirSim simulator. The models were trained and tested in different scenarios to analyze the learning progress of the drone. Experiences from previous training runs are also transferred before starting a new training by pre-processing the previous experiences and eliminating those considered bad. The results show that the best models are obtained with transfer learning, and the drone's learning progress increased dramatically. Additionally, an algorithm that combines imitation learning and reinforcement learning is implemented to catch the target drone. In this algorithm, called Deep Q-learning from Demonstrations (DQfD), expert demonstration data and data self-generated by the agent are sampled, and the agent continues learning without overwriting the demonstration data. The main advantage of this algorithm is that it accelerates the learning process even when only a small amount of demonstration data is available.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
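
For readers unfamiliar with the dueling architecture the paper builds on, the value/advantage decomposition is compact enough to show directly. The PyTorch sketch below is a generic illustration with made-up layer sizes, not the authors' network; it implements the standard dueling head, Q(s, a) = V(s) + A(s, a) - mean_a A(s, a):

    import torch
    import torch.nn as nn

    class DuelingQNet(nn.Module):
        # Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
            )
            self.value = nn.Linear(128, 1)              # state value V(s)
            self.advantage = nn.Linear(128, n_actions)  # advantages A(s, a)

        def forward(self, obs):
            h = self.trunk(obs)
            v = self.value(h)
            a = self.advantage(h)
            # Subtracting the mean advantage keeps V and A identifiable.
            return v + a - a.mean(dim=1, keepdim=True)

    q_net = DuelingQNet(obs_dim=12, n_actions=7)        # made-up sizes
    q_values = q_net(torch.randn(1, 12))                # one Q-value per action

Separating the state-value and advantage streams lets the network learn how good a state is without having to learn the effect of every action in it, which is why dueling DQNs often train faster in large action spaces.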

19 pages, 4095 KiB  
Article
Optimization of Medication Delivery Drone with IoT-Guidance Landing System Based on Direction and Intensity of Light
by Mohamed Osman Baloola, Fatimah Ibrahim and Mas S. Mohktar
Sensors 2022, 22(11), 4272; https://0-doi-org.brum.beds.ac.uk/10.3390/s22114272 - 03 Jun 2022
Cited by 4 | Viewed by 2956
Abstract
This paper presents an optimization of a medication delivery drone with an Internet of Things (IoT)-Guidance Landing System (IoT-GLS) based on the direction and intensity of light. The IoT-GLS was incorporated into the system to assist the drone's operator or autonomous system in selecting the best landing angle, with the selection based on the direction and intensity of the light. The medication delivery drone system was developed using an Arduino Uno microcontroller board, an ESP32 DevKitC V4 board, multiple sensors, and IoT mobile apps to optimize face detection. The system can detect and compare real-time light intensity from all directions. The results showed that the IoT-GLS improved the detection distance by 192% in a dark environment and extended the face detection distance to up to 147 cm in a room with low light intensity. Furthermore, a significant correlation was found between the face recognition detection distance and the light source direction, light intensity, and light color (p < 0.05). Optimal facial recognition efficiency for medication delivery was achieved thanks to the ability of the IoT-GLS to select the best landing angle based on light direction and intensity.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
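
The decision rule at the heart of such a system, ranking candidate approach directions by measured light intensity, is simple to express. The Python sketch below is a hypothetical illustration, not the paper's firmware; the sensor headings and lux values are invented for the example:

    # Hypothetical sketch: rank candidate landing headings by measured
    # light intensity; headings and lux values below are invented.
    def best_landing_heading(readings):
        # readings: dict of heading (degrees) -> light intensity (lux)
        heading, lux = max(readings.items(), key=lambda kv: kv[1])
        return heading, lux

    readings = {0: 120.0, 90: 340.5, 180: 85.2, 270: 210.7}
    print(best_landing_heading(readings))   # -> (90, 340.5)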

19 pages, 3404 KiB  
Article
Transmission Line Vibration Damper Detection Using Multi-Granularity Conditional Generative Adversarial Nets Based on UAV Inspection Images
by Wenxiang Chen, Yingna Li and Zhengang Zhao
Sensors 2022, 22(5), 1886; https://0-doi-org.brum.beds.ac.uk/10.3390/s22051886 - 28 Feb 2022
Cited by 4 | Viewed by 1963
Abstract
Vibration dampers eliminate the galloping of transmission lines caused by wind, and detecting them with visual technology is an important problem. Current CNN-based methods struggle to meet the requirements of real-time detection; therefore, vibration damper detection has mainly been carried out manually. In view of this situation, we propose a vibration damper detection-image generation model called DamperGAN, based on multi-granularity Conditional Generative Adversarial Nets. DamperGAN first generates a low-resolution detection result image using a coarse-grained module, then uses Monte Carlo search to mine the latent information in the low-resolution image, and finally injects this information into a fine-grained module through an attention mechanism to output high-resolution images while penalizing poor intermediate information. At the same time, we propose a multi-level discriminator based on a multi-task learning mechanism to improve the discriminator's discriminative ability and push the generator to output better images. Experiments on the self-built DamperGenSet dataset show that the images generated by our model are superior to current mainstream baselines in both resolution and quality.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
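
DamperGAN's multi-granularity design sits on top of the ordinary conditional-GAN objective, which is worth seeing in its plain form. The PyTorch sketch below shows generic conditional-GAN losses for an image-to-image setting; it is our simplification and omits the paper's Monte Carlo search, attention injection, and multi-level discriminator:

    import torch
    import torch.nn.functional as F

    # Generic conditional-GAN losses: G maps an inspection image x to a
    # detection-result image y, and D judges (x, y) pairs. This is the
    # plain objective only, not DamperGAN itself.
    def discriminator_loss(D, x, y_real, y_fake):
        real_logits = D(x, y_real)
        fake_logits = D(x, y_fake.detach())    # do not backprop into G here
        return (F.binary_cross_entropy_with_logits(
                    real_logits, torch.ones_like(real_logits))
                + F.binary_cross_entropy_with_logits(
                    fake_logits, torch.zeros_like(fake_logits)))

    def generator_loss(D, x, y_fake):
        fake_logits = D(x, y_fake)
        # G is rewarded when D labels its output as real.
        return F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))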

14 pages, 4062 KiB  
Article
Individualization of Pinus radiata Canopy from 3D UAV Dense Point Clouds Using Color Vegetation Indices
by Antonio M. Cabrera-Ariza, Miguel A. Lara-Gómez, Rómulo E. Santelices-Moya, Jose-Emilio Meroño de Larriva and Francisco-Javier Mesas-Carrascosa
Sensors 2022, 22(4), 1331; https://0-doi-org.brum.beds.ac.uk/10.3390/s22041331 - 09 Feb 2022
Cited by 3 | Viewed by 2366
Abstract
The location of trees and the individualization of their canopies are important parameters for estimating diameter, height, and biomass, among other variables, and the very high spatial resolution of UAV imagery supports these processes. A dense 3D point cloud is generated from RGB UAV images and used to obtain a digital elevation model (DEM); from this DEM, a canopy height model (CHM) is derived for individual tree identification. Although the results are satisfactory, the quality of this detection is reduced when the working area has a high density of vegetation. The objective of this study was to evaluate the use of color vegetation indices (CVIs) in canopy individualization processes for Pinus radiata. UAV flights were carried out, and a 3D dense point cloud and an orthomosaic were obtained. A CVI was then applied to the 3D point cloud to differentiate between vegetation and nonvegetation classes and obtain a DEM and a CHM, after which an automatic crown identification procedure was applied to the CHM. The results were evaluated by contrasting them with the results of manual individual tree identification on the UAV orthomosaic and with those obtained by applying a progressive triangulated irregular network to the 3D point cloud. The results indicate that the color information of 3D point clouds is a viable alternative for individualizing trees under high-density vegetation conditions.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
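
The filtering step, classifying colored 3D points as vegetation or ground with a color vegetation index before deriving the DEM and CHM, can be sketched in a few lines. Below is a minimal Python/NumPy illustration using the Excess Green (ExG) index, one common CVI; the paper evaluates CVIs generally, so treat the specific index and threshold here as assumptions:

    import numpy as np

    # Sketch: flag 3D points as vegetation with the Excess Green (ExG)
    # index; the specific index and threshold are assumptions.
    def vegetation_mask(rgb, threshold=0.05):
        # rgb: (N, 3) array of per-point colours scaled to [0, 1]
        total = rgb.sum(axis=1) + 1e-9          # channel normalisation
        r, g, b = (rgb[:, i] / total for i in range(3))
        exg = 2.0 * g - r - b                   # excess green index
        return exg > threshold                  # True -> vegetation point

    points = np.random.rand(1000, 6)            # x, y, z, r, g, b (synthetic)
    veg = vegetation_mask(points[:, 3:6])
    ground = points[~veg]                        # ground points feed the DEM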

23 pages, 8573 KiB  
Article
UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations
by Pawel Burdziakowski and Katarzyna Bobkowska
Sensors 2021, 21(10), 3531; https://0-doi-org.brum.beds.ac.uk/10.3390/s21103531 - 19 May 2021
Cited by 28 | Viewed by 4352
Abstract
The use of low-level photogrammetry is very broad, and studies in this field are conducted in many respects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not yet been widely recognized, since correct scene lighting, or the lack of scene light sources, is an obvious issue. The authors developed typical day- and night-time photogrammetric models, presented an extensive analysis of their geometry, and identified which process element had the greatest impact on degrading the night-time photogrammetric product, as well as which measurable factor correlated directly with image accuracy. The reduction in geometric quality during the night-time tests was greatly impacted by the non-uniform distribution of ground control points (GCPs) within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to higher determination errors for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, the authors evaluated whether commercial photogrammetric software enables acceptable image quality to be reached and whether the digital camera type impacts interpretative quality. The paper concludes with an extended discussion, conclusions, and recommendations on night-time studies.
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)
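
Reprojection error, the metric used here to compare day- and night-time calibrations, is straightforward to compute once a camera model has been estimated. The Python/OpenCV sketch below is a generic illustration, not the authors' processing chain; it projects known 3D points through an estimated camera model and averages the pixel residuals:

    import cv2
    import numpy as np

    # Sketch: mean reprojection error over a set of calibration images.
    # obj_pts[i]: (N, 3) known 3D points; img_pts[i]: (N, 2) measured pixels;
    # rvecs/tvecs, K (camera matrix), and dist come from a prior calibration.
    def mean_reprojection_error(obj_pts, img_pts, rvecs, tvecs, K, dist):
        total_err, total_pts = 0.0, 0
        for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
            proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
            residuals = imgp.reshape(-1, 2) - proj.reshape(-1, 2)
            total_err += np.linalg.norm(residuals, axis=1).sum()
            total_pts += len(objp)
        return total_err / total_pts             # average error in pixels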
