Computation and Analysis of Remote Sensing Imagery and Image Motion

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Engineering".

Deadline for manuscript submissions: closed (9 February 2022) | Viewed by 16220

Special Issue Editor


Dr. Evangelos Maltezos
Guest Editor
I-SENSE Group of the Institute of Communication and Computer Systems (ICCS), 15773 Zografou, Greece
Interests: smart sensor-based situational awareness and integration; edge computing; crisis management; computer vision; image and point cloud processing; machine learning; earth observation; terrestrial and UAV embedded systems; IoT systems

Special Issue Information

Dear Colleagues, 

Technological developments in the fields of image processing, computer vision, and remote sensing provide new tools and automated solutions for several applications, such as situational awareness, urban development, change detection, forest and crop monitoring, and emergency response.

This Special Issue invites manuscripts presenting both academic and industrial research on processing the information contained in remote sensing data. As this is a broad area, there are no constraints regarding the field of application. In this context, the aim of this Special Issue is to present current state-of-the-art methods for the computation and analysis of remote sensing data and image motion across several fields of application.

Dr. Evangelos Maltezos
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • object tracking
  • classification
  • machine learning
  • super resolution
  • human pose estimation
  • motion detection
  • satellite imagery
  • UAV imagery
  • terrestrial imagery

Published Papers (5 papers)


Research

14 pages, 3531 KiB  
Article
Treetop Detection in Mountainous Forests Using UAV Terrain Awareness Function
by Orou Berme Herve Gonroudobou, Leonardo Huisacayna Silvestre, Yago Diez, Ha Trang Nguyen and Maximo Larry Lopez Caceres
Computation 2022, 10(6), 90; https://doi.org/10.3390/computation10060090 - 2 Jun 2022
Cited by 1 | Viewed by 1999
Abstract
Unmanned aerial vehicles (UAVs) are becoming essential tools for surveying and monitoring forest ecosystems. However, most forests are found on steep slopes, where capturing individual tree characteristics may be compromised by differences in ground sampling distance (GSD) across the slope. We therefore tested the performance of two treetop detection algorithms on canopy height models (CHMs) obtained with a commercial UAV (Mavic 2 Pro) using the terrain awareness function (TAF). The surveyed area was a steep slope covered predominantly by fir (Abies mariesii) trees, over which the UAV was flown both following the terrain (TAF) and not following the terrain (NTAF). Results showed that when TAF was used, fir trees were clearly delimited, with lower branches clearly visible in the orthomosaic, regardless of slope position. As a result, the dense point clouds (DPCs) were denser and more homogeneously distributed along the slope with TAF than with NTAF. Two algorithms were applied for treetop detection: connected components and morphological operators. Connected components showed a 5% improvement in treetop detection accuracy with TAF (86.55%) compared to NTAF (81.55%) at the minimum matching error of 1 m. In contrast, with morphological operators, treetop detection accuracy reached 76.23% with TAF and 62.06% with NTAF. Thus, for treetop detection alone, NTAF can be sufficient when using sophisticated algorithms. However, NTAF showed a higher number of repeated points, leading to an overestimation of detected treetops.
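As a rough illustration of the connected-components idea mentioned in this abstract, the sketch below detects treetops as the highest CHM pixel inside each canopy blob. It is not the authors' implementation; the height threshold, minimum blob size, and the synthetic CHM are illustrative assumptions.

```python
# Minimal sketch of connected-components treetop detection on a canopy
# height model (CHM). Illustrative only: the height threshold, minimum
# blob size and the synthetic CHM are assumptions, not the paper's parameters.
import numpy as np
from scipy import ndimage

def detect_treetops(chm: np.ndarray, height_thresh: float = 5.0,
                    min_pixels: int = 25):
    """Return (row, col) of the highest pixel of each canopy blob."""
    canopy = chm > height_thresh                 # binary canopy mask
    labels, n_blobs = ndimage.label(canopy)      # connected components
    treetops = []
    for blob_id in range(1, n_blobs + 1):
        mask = labels == blob_id
        if mask.sum() < min_pixels:              # discard small blobs
            continue
        # treetop = location of the maximum CHM value inside the blob
        r, c = np.unravel_index(np.argmax(np.where(mask, chm, -np.inf)),
                                chm.shape)
        treetops.append((r, c))
    return treetops

# Example with a synthetic CHM (two Gaussian "trees" on flat ground)
yy, xx = np.mgrid[0:100, 0:100]
chm = 12 * np.exp(-((yy - 30)**2 + (xx - 30)**2) / 50) \
    + 10 * np.exp(-((yy - 70)**2 + (xx - 65)**2) / 40)
print(detect_treetops(chm))
```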

12 pages, 4204 KiB  
Article
Classifying the Degree of Bark Beetle-Induced Damage on Fir (Abies mariesii) Forests, from UAV-Acquired RGB Images
by Tobias Leidemer, Orou Berme Herve Gonroudobou, Ha Trang Nguyen, Chiara Ferracini, Benjamin Burkhard, Yago Diez and Maximo Larry Lopez Caceres
Computation 2022, 10(4), 63; https://doi.org/10.3390/computation10040063 - 18 Apr 2022
Cited by 4 | Viewed by 2396
Abstract
Bark beetle outbreaks are responsible for the loss of large areas of forest, and in recent years they appear to be increasing in frequency and magnitude as a result of climate change. The aim of this study is to develop a new standardized methodology for automatically detecting the degree of damage on single fir trees caused by bark beetle attacks, using a simple GIS-based model. The classification approach is based on the degree of tree canopy defoliation observed (white pixels) in UAV-acquired very high resolution RGB orthophotos. We defined six degrees (categories) of damage (healthy, four infested levels, and dead) based on the ratio of white pixels to the total number of pixels of a given tree canopy. Category 1: <2.5% (no defoliation); Category 2: 2.5–10% (very low defoliation); Category 3: 10–25% (low defoliation); Category 4: 25–50% (medium defoliation); Category 5: 50–75% (high defoliation); and Category 6: >75% (dead). The definition of "white pixel" is crucial, since light conditions during image acquisition drastically affect pixel values. Thus, whiteness was defined as the ratio of the red to the blue value of every single pixel, relative to the ratio of the mean red to the mean blue value of the whole orthomosaic. The results show that in an area of 4 ha, out of 1376 trees, 277 were healthy, 948 were infested (Cat 2: 628; Cat 3: 244; Cat 4: 64; Cat 5: 12), and 151 were dead (Cat 6). The validation led to an average precision of 62%, with Cat 1 and Cat 6 reaching precisions of 73% and 94%, respectively.
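The white-pixel rule and the six damage categories quoted in the abstract translate almost directly into code. The following sketch is a hedged reading of that rule, assuming NumPy RGB arrays; the array names and the exact NumPy formulation are assumptions, not the authors' GIS-based model.

```python
# Sketch of the white-pixel ratio classification described above.
# A pixel counts as "white" when its red/blue ratio exceeds the
# mean-red/mean-blue ratio of the whole orthomosaic (as in the abstract).
import numpy as np

# Damage categories from the abstract: (upper bound of white-pixel ratio, label)
CATEGORIES = [(0.025, "Cat 1: healthy"),
              (0.10,  "Cat 2: very low defoliation"),
              (0.25,  "Cat 3: low defoliation"),
              (0.50,  "Cat 4: medium defoliation"),
              (0.75,  "Cat 5: high defoliation"),
              (1.01,  "Cat 6: dead")]

def classify_canopy(rgb_canopy: np.ndarray, rgb_orthomosaic: np.ndarray) -> str:
    """Both arrays have shape (H, W, 3); returns the damage category label."""
    global_ratio = (rgb_orthomosaic[..., 0].astype(float).mean()
                    / rgb_orthomosaic[..., 2].astype(float).mean())
    pixel_ratio = (rgb_canopy[..., 0].astype(float)
                   / np.maximum(rgb_canopy[..., 2].astype(float), 1e-6))
    white_fraction = (pixel_ratio > global_ratio).mean()
    for upper, label in CATEGORIES:
        if white_fraction < upper:
            return label
    return CATEGORIES[-1][1]
```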

20 pages, 1808 KiB  
Article
A Comparative Study of Autonomous Object Detection Algorithms in the Maritime Environment Using a UAV Platform
by Emmanuel Vasilopoulos, Georgios Vosinakis, Maria Krommyda, Lazaros Karagiannidis, Eleftherios Ouzounoglou and Angelos Amditis
Computation 2022, 10(3), 42; https://doi.org/10.3390/computation10030042 - 15 Mar 2022
Cited by 7 | Viewed by 3226
Abstract
Maritime operations rely heavily on surveillance and require reliable and timely data that can inform decisions and planning. Critical information in such cases includes the exact location of objects in the water, such as vessels and persons. Due to the unique characteristics of the maritime environment, the location of even inert objects changes through time, depending on the weather conditions, water currents, etc. Unmanned aerial vehicles (UAVs) can support maritime operations by providing live video streams and images from the area of operations. Machine learning algorithms can be developed, trained, and used to automatically detect and track objects of specific types and characteristics. EFFECTOR is an EU-funded project developing an Interoperability Framework for maritime surveillance. Within the project, we developed an embedded system that employs machine learning algorithms, allowing a UAV to autonomously detect objects in the water and keep track of their changing positions through time. Using the on-board computation unit of the UAV, we ran a series of comparative tests among possible architecture sizes and training datasets for the detection and tracking of objects in the maritime environment, and we present the results. Architectures were tested for their efficiency, accuracy, and speed. A combined solution for the training datasets is suggested, providing optimal efficiency and accuracy.
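To make the kind of efficiency/accuracy/speed comparison described above concrete, here is a minimal, library-agnostic benchmarking sketch. The detector interface, dummy models, and metrics are illustrative assumptions only and do not reproduce the project's actual architectures or datasets.

```python
# Hedged sketch of an accuracy-vs-speed comparison harness for object
# detectors. The detector callables and the count-based accuracy metric
# are stand-ins, not the EFFECTOR project's code or models.
import time
import numpy as np

def benchmark(detector, frames, ground_truth_counts):
    """Time a detector over frames and compare detected vs. expected counts."""
    latencies, correct = [], 0
    for frame, expected in zip(frames, ground_truth_counts):
        start = time.perf_counter()
        detections = detector(frame)            # list of bounding boxes
        latencies.append(time.perf_counter() - start)
        correct += int(len(detections) == expected)
    return {"fps": 1.0 / np.mean(latencies),
            "count_accuracy": correct / len(frames)}

# Usage: compare two dummy "architecture sizes" on random frames.
frames = [np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
          for _ in range(20)]
truth = [1] * 20
small_model = lambda img: [(0, 0, 50, 50)]      # stand-in for a small detector
large_model = lambda img: [(0, 0, 50, 50)]      # stand-in for a large detector
for name, det in [("small", small_model), ("large", large_model)]:
    print(name, benchmark(det, frames, truth))
```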

22 pages, 103397 KiB  
Article
A Video Analytics System for Person Detection Combined with Edge Computing
by Evangelos Maltezos, Panagiotis Lioupis, Aris Dadoukis, Lazaros Karagiannidis, Eleftherios Ouzounoglou, Maria Krommyda and Angelos Amditis
Computation 2022, 10(3), 35; https://doi.org/10.3390/computation10030035 - 25 Feb 2022
Cited by 10 | Viewed by 4204
Abstract
Ensuring citizens' safety and security has been identified as the number one priority for city authorities when it comes to the use of smart city technologies. Automatic understanding of the scene, and the associated provision of situational awareness in emergency situations, can contribute efficiently to such domains. In this study, a Video Analytics Edge Computing (VAEC) system is presented that provides real-time enhanced situational awareness for person detection in video surveillance and is also able to share geolocated person-detection alerts and other accompanying crucial information. The VAEC system adopts state-of-the-art object detection and tracking algorithms and is integrated with the proposed Distributed Edge Computing Internet of Things (DECIoT) platform. These alerts and information can be shared, through the DECIoT, with smart city platforms using appropriate middleware. To verify the utility and functionality of the VAEC system, extended experiments were performed (i) under several lighting conditions, (ii) using several camera sensors, and (iii) in several use cases, such as installation at a fixed position on a building or mounted on a car. The results highlight the potential of the VAEC system to be exploited by decision-makers or city authorities, providing enhanced situational awareness.
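As a sketch of the detection-to-alert flow described above, the snippet below packages a single person detection into a geolocated JSON alert and hands it to a stand-in publish function. The alert schema, field names, and the publish() placeholder are assumptions; the actual DECIoT middleware and message format are not specified in the abstract.

```python
# Minimal sketch of turning a person detection into a geolocated alert
# message, in the spirit of the VAEC/DECIoT pipeline described above.
# The schema and the publish() stand-in are illustrative assumptions.
import json
import time

def build_alert(bbox, confidence, camera_lat, camera_lon, camera_id):
    """Package one person detection as a JSON alert payload."""
    return json.dumps({
        "type": "person_detection",
        "camera_id": camera_id,
        "timestamp_utc": time.time(),
        "location": {"lat": camera_lat, "lon": camera_lon},
        "bbox_xywh": bbox,
        "confidence": confidence,
    })

def publish(topic: str, payload: str) -> None:
    # Stand-in for the edge-to-platform middleware; print instead of sending.
    print(f"[{topic}] {payload}")

# Example: one detection from a camera at a fixed position on a building
# (hypothetical coordinates and camera identifier).
alert = build_alert(bbox=[412, 188, 64, 140], confidence=0.91,
                    camera_lat=37.9795, camera_lon=23.7162,
                    camera_id="vaec-cam-01")
publish("alerts/person", alert)
```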

23 pages, 14820 KiB  
Article
Unimodal and Multimodal Perception for Forest Management: Review and Dataset
by Daniel Queirós da Silva, Filipe Neves dos Santos, Armando Jorge Sousa, Vítor Filipe and José Boaventura-Cunha
Computation 2021, 9(12), 127; https://doi.org/10.3390/computation9120127 - 29 Nov 2021
Cited by 9 | Viewed by 3287
Abstract
Robotics navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they can enable the development of robotic and machinery solutions to accomplish smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
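For readers who want to work with an image-plus-laser-scan dataset of this kind, the sketch below pairs the two modalities by a shared filename stem. The directory layout and file extensions are assumptions, not the published dataset's actual structure.

```python
# Illustrative pairing of images with laser scans in a multimodal forest
# dataset. Folder names and extensions are hypothetical.
from pathlib import Path

def pair_modalities(root: str):
    """Yield (image_path, scan_path) pairs that share the same filename stem."""
    root = Path(root)
    scans = {p.stem: p for p in (root / "scans").glob("*.pcd")}
    for image in sorted((root / "images").glob("*.png")):
        if image.stem in scans:
            yield image, scans[image.stem]

# Usage (hypothetical directory):
# for img, scan in pair_modalities("forest_dataset"):
#     print(img.name, "<->", scan.name)
```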
