
Innovations in Photogrammetry and Remote Sensing: Modern Sensors, New Processing Strategies and Frontiers in Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (10 February 2022) | Viewed by 31035

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors



Special Issue Information

Dear Colleagues,

This Special Issue aims to collect papers that demonstrate progress in key areas of photogrammetry and remote sensing. Papers focused on modern and/or forthcoming sensors, improvements in data processing strategies, and assessment of their reliability are welcome. In addition, the Special Issue seeks papers that apply such innovations, as proof of their contribution to the observation of the natural and built environment and to the understanding of phenomena at the required spatial scale. In particular, proposed submissions may address the following topics:

- Forthcoming sensors in photogrammetry and remote sensing

- Quality Assurance / Quality Control (QA/QC)

- Potential offered by multi-sensor data fusion

- Methodologies for near real-time mapping and monitoring from aerial/satellite platforms

- Handling of big datasets

- Artificial Intelligence for data processing

- 3D modelling

- Error budget

- Novel approaches for processing of multi-temporal data

- Design, testing and applications of new sensors

Prof. Francesco Mancini
Prof. Francesco Pirotti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Photogrammetry
  • Remote sensing
  • Innovative sensors
  • Multi-sensor data fusion
  • Artificial Intelligence for data processing
  • 3D modelling
  • Error budget
  • Earth observation

Published Papers (10 papers)


Editorial

Jump to: Research

2 pages, 163 KiB  
Editorial
Innovations in Photogrammetry and Remote Sensing: Modern Sensors, New Processing Strategies and Frontiers in Applications
by Francesco Mancini and Francesco Pirotti
Sensors 2021, 21(7), 2420; https://0-doi-org.brum.beds.ac.uk/10.3390/s21072420 - 01 Apr 2021
Cited by 5 | Viewed by 1924
Abstract
The recent development and rapid evolution of modern sensors and new processing strategies of collected data have paved the way for innovations in photogrammetry and remote sensing [...] Full article

Research

Jump to: Editorial

21 pages, 5686 KiB  
Article
Soil Moisture Content Retrieval from Remote Sensing Data by Artificial Neural Network Based on Sample Optimization
by Qixin Liu, Xingfa Gu, Xinran Chen, Faisal Mumtaz, Yan Liu, Chunmei Wang, Tao Yu, Yin Zhang, Dakang Wang and Yulin Zhan
Sensors 2022, 22(4), 1611; https://0-doi-org.brum.beds.ac.uk/10.3390/s22041611 - 18 Feb 2022
Cited by 3 | Viewed by 2310
Abstract
Soil moisture content (SMC) plays an essential role in geoscience research. The SMC can be retrieved using an artificial neural network (ANN) based on remote sensing data. The quantity and quality of samples for ANN training and testing are two critical factors that affect the SMC retrieval results. This study focused on sample optimization in both quantity and quality. On the one hand, a sparse sample exploitation (SSE) method was developed to solve the problem of sample scarcity resulting from cloud obstruction in optical images and the malfunction of in situ SMC-measuring instruments. With this method, data typically excluded in conventional approaches can be adequately employed. On the other hand, apart from the basic input parameters commonly discussed in previous studies, a couple of new parameters were optimized to improve the feature description. Sentinel-1 SAR and Landsat-8 images were adopted to retrieve SMC in the study area in eastern Austria. With the SSE method, the number of available samples increased from 264 to 635 for ANN training and testing, and the retrieval accuracy was markedly improved. Furthermore, the optimized parameters also improved the inversion results, and elevation was the most influential input parameter. Full article
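To illustrate the kind of ANN regression the abstract describes, the sketch below trains a minimal one-hidden-layer network on synthetic feature/moisture pairs with plain gradient descent. The three input features (standing in for, e.g., SAR backscatter, a vegetation index, and elevation) and the target relation are hypothetical, not the paper's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 3 hypothetical input features per sample,
# with a soil-moisture target that depends linearly on them.
X = rng.normal(size=(200, 3))
y = (0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2]).reshape(-1, 1)

# One hidden layer (tanh), trained with batch gradient descent on MSE.
W1 = rng.normal(scale=0.5, size=(3, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the MSE loss through both layers.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # training error decreases
```

The paper's contribution concerns which samples and parameters feed such a network, not the network itself; any off-the-shelf regressor could replace this sketch.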

29 pages, 23555 KiB  
Article
UAV Block Geometry Design and Camera Calibration: A Simulation Study
by Riccardo Roncella and Gianfranco Forlani
Sensors 2021, 21(18), 6090; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186090 - 11 Sep 2021
Cited by 9 | Viewed by 2390
Abstract
Acknowledged guidelines and standards such as those formerly governing project planning in analogue aerial photogrammetry are still missing in UAV photogrammetry. The reasons are many, from a great variety of project goals to the number of parameters involved: camera features, flight plan design, block control and georeferencing options, Structure from Motion settings, etc. Above all, perhaps, stands camera calibration with the alternative between pre- and on-the-job approaches. In this paper we present a Monte Carlo simulation study where the accuracy estimation of camera parameters and tie points' ground coordinates is evaluated as a function of various project parameters. A set of UAV (Unmanned Aerial Vehicle) synthetic photogrammetric blocks, built by varying terrain shape, surveyed area shape, block control (ground and aerial), strip type (longitudinal, cross and oblique), image observation and control data precision has been synthetically generated, overall considering 144 combinations in on-the-job self-calibration. Bias in ground coordinates (dome effect) due to inaccurate pre-calibration has also been investigated. Under the test scenario, the accuracy gap between different block configurations can be close to an order of magnitude. Oblique imaging is confirmed as a key requisite in flat terrain, while ground control density is not. Aerial control by accurate camera station positions is overall more accurate and efficient than GCPs in flat terrain. Full article

20 pages, 25659 KiB  
Article
Histogram Adjustment of Images for Improving Photogrammetric Reconstruction
by Piotr Łabędź, Krzysztof Skabek, Paweł Ozimek and Mateusz Nytko
Sensors 2021, 21(14), 4654; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144654 - 07 Jul 2021
Cited by 8 | Viewed by 3425
Abstract
The accuracy of photogrammetric reconstruction depends largely on the acquisition conditions and on the quality of input photographs. This paper proposes methods of improving raster images that increase photogrammetric reconstruction accuracy. These methods are based on modifying color image histograms. Special emphasis was placed on the selection of channels of the RGB and CIE L*a*b* color models for further improvement of the reconstruction process. A methodology was proposed for assessing the quality of reconstruction based on premade reference models using positional statistics. The analysis of the influence of image enhancement on reconstruction was carried out for various types of objects. The proposed methods can significantly improve the quality of reconstruction. The superiority of methods based on the luminance channel of the L*a*b* model was demonstrated. Our studies indicated high efficiency of the histogram equalization method (HE), although these results were not highly distinctive for all performed tests. Full article
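The histogram equalization (HE) step that the abstract singles out can be sketched with NumPy alone. This is a generic single-channel HE (the classic cumulative-distribution remapping), not the authors' exact pipeline or channel selection:

```python
import numpy as np

def equalize_histogram(channel: np.ndarray) -> np.ndarray:
    """Classic histogram equalization of a single 8-bit channel."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each grey level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[channel]

# Low-contrast example: values squeezed into [100, 140].
rng = np.random.default_rng(1)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
eq = equalize_histogram(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())  # stretched to 0..255
```

In the paper's setting this transform would be applied to a selected channel (e.g., the luminance of CIE L*a*b*) before feeding the images to the photogrammetric pipeline.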

20 pages, 3284 KiB  
Article
High-Precision Automatic Calibration Modeling of Point Light Source Tracking Systems
by Ruijin Li, Liming Zhang, Xianhua Wang, Weiwei Xu, Xin Li, Jiawei Li and Chunhui Hu
Sensors 2021, 21(7), 2270; https://0-doi-org.brum.beds.ac.uk/10.3390/s21072270 - 24 Mar 2021
Cited by 1 | Viewed by 1683
Abstract
To realize high-precision and high-frequency unattended site calibration and detection of satellites, automatic direction adjustment must be implemented in mirror arrays. This paper proposes a high-precision automatic calibration model based on a novel point light source tracking system for mirror arrays. A camera automatically observes the solar vector, and an observation equation coupling the image space and local coordinate systems is established. High-precision calibration of the system is realized through geometric error calculation of multipoint observation data. Moreover, model error analysis and solar tracking verification experiments are conducted. The standard deviations of the pitch angle and azimuth angle errors are 0.0176° and 0.0305°, respectively. The root mean square errors of the image centroid contrast are 2.0995 and 0.8689 pixels along the x- and y-axes, respectively. The corresponding pixel angular resolution errors are 0.0377° and 0.0144°, and the comprehensive angle resolution error is 0.0403°. The calculated model values are consistent with the measured data, validating the model. The proposed point light source tracking system can satisfy the requirements of high-resolution, high-precision, high-frequency on-orbit satellite radiometric calibration and modulation transfer function detection. Full article

21 pages, 12075 KiB  
Communication
Damage Proxy Map of the Beirut Explosion on 4th of August 2020 as Observed from the Copernicus Sensors
by Athos Agapiou
Sensors 2020, 20(21), 6382; https://0-doi-org.brum.beds.ac.uk/10.3390/s20216382 - 09 Nov 2020
Cited by 12 | Viewed by 4824
Abstract
On the 4th of August 2020, a massive explosion occurred in the harbor area of Beirut, Lebanon, killing more than 100 people and damaging numerous buildings in its proximity. The current article aims to showcase how open access and freely distributed satellite data, such as those of the Copernicus radar and optical sensors, can deliver a damage proxy map of this devastating event. Sentinel-1 radar images acquired just prior to (the 24th of July 2020) and after the event (5th of August 2020) were processed and analyzed, indicating areas with significant changes of the VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive) backscattering signal. In addition, an Interferometric Synthetic Aperture Radar (InSAR) analysis was performed for both descending (31st of July 2020 and 6th of August 2020) and ascending (29th of July 2020 and 10th of August 2020) orbits of Sentinel-1 images, indicating relatively small ground displacements in the area near the harbor. Moreover, low coherence for these images is mapped around the blast zone. The current study uses the Hybrid Pluggable Processing Pipeline (HyP3) cloud-based system provided by the Alaska Satellite Facility (ASF) for the processing of the radar datasets. In addition, medium-resolution Sentinel-2 optical data were used to assess the damage in the area through thorough visual inspection and Principal Component Analysis (PCA). While the overall findings are well aligned with other official reports found on the World Wide Web, which were mainly delivered by international space agencies, those reports were generated after the processing of either optical or radar datasets. In contrast, the current communication showcases how both optical and radar satellite data can be used in parallel to map such devastating events. The use of open access and freely distributed Sentinel mission data was found to be very promising for delivering damage proxy maps after devastating events worldwide. Full article
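A common way to turn a pre/post SAR pair into a change indicator, related to the backscatter comparison described above, is a log-ratio map with a decibel threshold. The sketch below runs on synthetic intensities and is an illustration of that generic technique, not the HyP3-based workflow used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre- and post-event backscatter intensity images (linear scale).
pre = rng.gamma(shape=4.0, scale=0.25, size=(100, 100))
post = pre.copy()
post[40:60, 40:60] *= 0.2          # simulated damage: strong backscatter drop

# Log-ratio change indicator, a standard proxy for surface change in SAR pairs.
log_ratio = np.abs(10 * np.log10(post / pre))

# Flag pixels whose backscatter changed by more than a fixed dB threshold.
damage_mask = log_ratio > 3.0
print(damage_mask.sum())  # 400 pixels flagged (the 20x20 simulated block)
```

In practice the threshold would be tuned (or replaced by coherence-based criteria, as in the paper) and the mask post-filtered to suppress speckle-induced false alarms.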

24 pages, 22834 KiB  
Article
Surface Reconstruction Assessment in Photogrammetric Applications
by Erica Nocerino, Elisavet Konstantina Stathopoulou, Simone Rigon and Fabio Remondino
Sensors 2020, 20(20), 5863; https://0-doi-org.brum.beds.ac.uk/10.3390/s20205863 - 16 Oct 2020
Cited by 26 | Viewed by 3688
Abstract
The image-based 3D reconstruction pipeline aims to generate complete digital representations of the recorded scene, often in the form of 3D surfaces. These surfaces or mesh models are required to be highly detailed as well as sufficiently accurate, especially for metric applications. Surface generation can be considered as a problem integrated in the complete 3D reconstruction workflow, in which case visibility information (pixel similarity and image orientation) is leveraged in the meshing procedure, contributing to an optimal photo-consistent mesh. Other methods tackle the problem as an independent and subsequent step, generating a mesh model starting from a dense 3D point cloud or even using depth maps, discarding input image information. Out of the vast number of approaches for 3D surface generation, in this study we considered three state-of-the-art methods. Experiments were performed on benchmark and proprietary datasets of varying nature, scale, shape, image resolution and network design. Several evaluation metrics were introduced and considered to present a qualitative and quantitative assessment of the results. Full article
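One simple quantitative metric of the kind such assessments rely on is the nearest-neighbour distance from each reconstructed point to a reference cloud. The brute-force sketch below (fine for small benchmark crops; the paper's actual metrics are not specified here) uses synthetic data:

```python
import numpy as np

def cloud_to_reference_distances(cloud, reference):
    """Nearest-neighbour distance from each reconstructed point to the
    reference cloud, computed by brute force (O(N*M) pairwise distances)."""
    diff = cloud[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

rng = np.random.default_rng(3)
reference = rng.uniform(size=(500, 3))
# Simulated reconstruction: the reference geometry plus small noise.
cloud = reference + rng.normal(scale=0.01, size=reference.shape)

d = cloud_to_reference_distances(cloud, reference)
print(d.mean(), np.median(d), d.max())
```

Summary statistics of `d` (mean, median, RMS, percentiles) are the usual ingredients of the qualitative and quantitative comparisons the abstract mentions; for large clouds a KD-tree would replace the brute-force search.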

23 pages, 15685 KiB  
Article
Rough or Noisy? Metrics for Noise Estimation in SfM Reconstructions
by Ivan Nikolov and Claus Madsen
Sensors 2020, 20(19), 5725; https://0-doi-org.brum.beds.ac.uk/10.3390/s20195725 - 08 Oct 2020
Cited by 2 | Viewed by 3016
Abstract
Structure from Motion (SfM) can produce highly detailed 3D reconstructions, but distinguishing real surface roughness from reconstruction noise and geometric inaccuracies has always been a difficult problem to solve. Existing SfM commercial solutions achieve noise removal by a combination of aggressive global smoothing and the reconstructed texture for smaller details, which is a subpar solution when the results are used for surface inspection. Other noise estimation and removal algorithms do not take advantage of all the additional data connected with SfM. We propose a number of geometrical and statistical metrics for noise assessment, based on both the reconstructed object and the capturing camera setup. We test the correlation of each of the metrics to the presence of noise on reconstructed surfaces and demonstrate that classical supervised learning methods, trained with these metrics, can be used to distinguish between noise and roughness with an accuracy above 85%, with an additional 5–6% of performance coming from the capturing-setup metrics. Our proposed solution can easily be integrated into existing SfM workflows as it does not require more image data or additional sensors. Finally, as part of the testing, we created an image dataset for SfM from a number of objects of varying shapes and sizes, which is available online together with ground truth annotations. Full article
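The pipeline the abstract describes, compute geometric metrics on the surface and feed them to a supervised classifier, can be miniaturised as follows. The single metric, the synthetic patches, and the learned-threshold classifier are illustrative stand-ins, not the paper's metrics or models:

```python
import numpy as np

rng = np.random.default_rng(4)

def local_variation(z: np.ndarray) -> float:
    """Mean absolute height difference between neighbouring grid vertices,
    a simple geometric metric for a height-field patch."""
    return float(np.abs(np.diff(z, axis=0)).mean()
                 + np.abs(np.diff(z, axis=1)).mean())

# Synthetic patches: 'noisy' = flat plane + high-frequency jitter,
# 'rough' = smooth low-frequency relief of similar amplitude.
def noisy_patch():
    return rng.normal(scale=0.05, size=(32, 32))

def rough_patch():
    x = np.linspace(0, 2 * np.pi, 32)
    return 0.3 * np.sin(x)[None, :] * np.cos(x)[:, None]

# Supervised step: learn a decision threshold from labelled examples.
feats = [local_variation(noisy_patch()) for _ in range(20)] + \
        [local_variation(rough_patch()) for _ in range(20)]
labels = np.array([1] * 20 + [0] * 20)          # 1 = noise, 0 = roughness
threshold = (np.mean(feats[:20]) + np.mean(feats[20:])) / 2

pred = (np.array(feats) > threshold).astype(int)
accuracy = (pred == labels).mean()
print(accuracy)
```

The paper's point is that several such metrics, including ones derived from the capturing camera setup, feed a proper classifier; this sketch only shows why a high-frequency metric separates noise from smooth roughness at all.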

21 pages, 4140 KiB  
Article
A Novel Change Detection Method for Natural Disaster Detection and Segmentation from Video Sequence
by Huijiao Qiao, Xue Wan, Youchuan Wan, Shengyang Li and Wanfeng Zhang
Sensors 2020, 20(18), 5076; https://0-doi-org.brum.beds.ac.uk/10.3390/s20185076 - 07 Sep 2020
Cited by 17 | Viewed by 2809
Abstract
Change detection (CD) is critical for natural disaster detection, monitoring and evaluation. Video satellites, new types of satellites launched recently, are able to record motion during natural disasters. This raises a new problem for traditional CD methods, as they can only detect areas with highly changed radiometric and geometric information. Optical flow-based methods are able to perform pixel-based motion tracking at fast speed; however, it is difficult to determine with them an optimal threshold for separating the changed from the unchanged part in CD problems. To overcome the above problems, this paper proposes a novel automatic change detection framework: OFATS (optical flow-based adaptive thresholding segmentation). Combining the characteristics of optical flow data, a new objective function based on the ratio of maximum between-class variance to minimum within-class variance is constructed. The two key steps are motion detection based on optical flow estimation using a deep learning (DL) method, and changed-area segmentation based on adaptive threshold selection. Experiments were carried out using two groups of video sequences, demonstrating that the proposed method achieves high accuracy, with F1 values of 0.98 and 0.94, respectively. Full article
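The thresholding criterion named in the abstract, maximizing the ratio of between-class to within-class variance, can be sketched in one dimension on simulated optical-flow magnitudes. This is a simplified Otsu-style search, not the full OFATS framework:

```python
import numpy as np

def adaptive_threshold(values: np.ndarray) -> float:
    """Pick the threshold maximizing the ratio of between-class variance
    to within-class variance over a grid of candidate thresholds."""
    candidates = np.linspace(values.min(), values.max(), 256)[1:-1]
    best_t, best_score = float(candidates[0]), -np.inf
    for t in candidates:
        lo, hi = values[values <= t], values[values > t]
        if len(lo) < 2 or len(hi) < 2:
            continue
        w_lo, w_hi = len(lo) / len(values), len(hi) / len(values)
        between = w_lo * w_hi * (lo.mean() - hi.mean()) ** 2
        within = w_lo * lo.var() + w_hi * hi.var()
        score = between / within if within > 0 else -np.inf
        if score > best_score:
            best_t, best_score = float(t), score
    return best_t

rng = np.random.default_rng(5)
# Simulated flow magnitudes: many near-static pixels, a cluster of moving ones.
flow = np.concatenate([rng.normal(0.1, 0.05, 900), rng.normal(2.0, 0.3, 100)])
t = adaptive_threshold(flow)
print(t)  # lands in the gap between the static and moving clusters
```

In OFATS the input would be the per-pixel optical flow estimated by a deep network, and the resulting threshold separates changed from unchanged pixels automatically.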

20 pages, 4903 KiB  
Article
Potential of Pléiades and WorldView-3 Tri-Stereo DSMs to Represent Heights of Small Isolated Objects
by Ana-Maria Loghin, Johannes Otepka-Schremmer and Norbert Pfeifer
Sensors 2020, 20(9), 2695; https://0-doi-org.brum.beds.ac.uk/10.3390/s20092695 - 09 May 2020
Cited by 20 | Viewed by 3513
Abstract
High-resolution stereo and multi-view imagery are used for digital surface model (DSM) derivation over large areas for numerous applications in topography, cartography, geomorphology, and 3D surface modelling. Dense image matching is a key component in 3D reconstruction and mapping, although the 3D reconstruction process encounters difficulties for water surfaces, areas with no texture or with a repetitive pattern appearance in the images, and for very small objects. This study investigates the capabilities and limitations of space-borne very high resolution imagery, specifically Pléiades (0.70 m) and WorldView-3 (0.31 m) imagery, with respect to the automatic point cloud reconstruction of small isolated objects. For this purpose, single buildings, vehicles, and trees were analyzed. The main focus is to quantify their detectability in the photogrammetrically-derived DSMs by estimating their heights as a function of object type and size. The estimated height was investigated with respect to the following parameters: building length and width, vehicle length and width, and tree crown diameter. Manually measured object heights from the oriented images were used as a reference. We demonstrate that the DSM-based estimated height of a single object strongly depends on its size, and we quantify this effect. Starting from very small objects, which are not elevated against their surroundings, and ending with large objects, we obtained a gradual increase of the relative heights. For small vehicles, buildings, and trees (lengths <7 pixels, crown diameters <4 pixels), the Pléiades-derived DSM showed less than 20% or none of the actual object’s height. For large vehicles, buildings, and trees (lengths >14 pixels, crown diameters >7 pixels), the estimated heights were higher than 60% of the real values. 
In the case of the WorldView-3 derived DSM, the estimated height of small vehicles, buildings, and trees (lengths <16 pixels, crown diameters <8 pixels) was less than 50% of their actual height, whereas larger objects (lengths >33 pixels, crown diameters >16 pixels) were reconstructed at more than 90% in height. Full article
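The pixel thresholds reported above lend themselves to a quick detectability check: convert an object's ground size and the sensor's ground sample distance (GSD) into pixels and compare against the small/large limits. The function names below are ours, for illustration only; the thresholds are the ones quoted in the abstract.

```python
# Back-of-the-envelope detectability check based on the reported thresholds.

def length_in_pixels(length_m: float, gsd_m: float) -> float:
    """Object length in image pixels = ground size / ground sample distance."""
    return length_m / gsd_m

def expected_height_recovery(length_px: float, small_px: float, large_px: float) -> str:
    """Classify an object against the paper's small/large pixel thresholds."""
    if length_px < small_px:
        return "mostly missed"        # <20% (Pleiades) / <50% (WV-3) of height
    if length_px > large_px:
        return "largely recovered"    # >60% (Pleiades) / >90% (WV-3) of height
    return "partially recovered"

# A 5 m vehicle seen by Pleiades (0.70 m GSD) vs WorldView-3 (0.31 m GSD),
# using the <7 px / >14 px (Pleiades) and <16 px / >33 px (WV-3) thresholds.
for name, gsd, small, large in [("Pleiades", 0.70, 7, 14),
                                ("WorldView-3", 0.31, 16, 33)]:
    px = length_in_pixels(5.0, gsd)
    print(name, round(px, 1), expected_height_recovery(px, small, large))
```

At both resolutions a 5 m vehicle sits between the thresholds, which matches the abstract's finding that small isolated objects are only partially elevated in the derived DSMs.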
