Image Simulation in Remote Sensing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Physics General".

Deadline for manuscript submissions: closed (10 June 2021) | Viewed by 13452

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Prof. Dr. YangDam Eo
Guest Editor
Department of Advanced Technology Fusion, Konkuk University, Seoul 05029, Korea
Interests: RS image classification; RS image simulation

Special Issue Information

Dear Colleagues,

Large numbers of remotely sensed images can be acquired from a diversity of multi-resolution sensors for distribution to a wide range of users. However, the atmospheric and environmental conditions present in the observed scene inevitably degrade the quality of remotely sensed images. One way to overcome this limitation is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an image that could not be obtained at a specifically required time. Research on image simulation, or the generation of synthetic images, has rapidly been gaining interest. This Special Issue aims to attract novel contributions covering topics of interest that include, but are not limited to, the following:

  • Image simulation at a specific time when observation is obstructed by weather and atmospheric effects
  • Image simulation using images from manufactured sensors
  • Image simulation under virtual atmospheric and environmental conditions
  • Optical image simulation from SAR images
  • Optical image simulation from optical images
  • Multi-resolution image simulation
  • Image conversion for multi-sensor images obtained from different acquisition methods
  • Image simulation by spectral resolution

Both theoretical and experiment-oriented papers, including case studies and reviews, are encouraged for submission.

Prof. Dr. YangDam Eo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic images
  • simulated images
  • multi-sensor
  • image conversion

Published Papers (7 papers)


Editorial

Jump to: Research

3 pages, 164 KiB  
Editorial
Special Issue on Image Simulation in Remote Sensing
by Yang Dam Eo
Appl. Sci. 2021, 11(18), 8346; https://0-doi-org.brum.beds.ac.uk/10.3390/app11188346 - 09 Sep 2021
Viewed by 906
Abstract
Recently, various remote sensing sensors have been used and their performance has developed rapidly [...] Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)

Research

Jump to: Editorial

16 pages, 4951 KiB  
Article
Sensor-Level Mosaic of Multistrip KOMPSAT-3 Level 1R Products
by Changno Lee and Jaehong Oh
Appl. Sci. 2021, 11(15), 6796; https://0-doi-org.brum.beds.ac.uk/10.3390/app11156796 - 23 Jul 2021
Cited by 1 | Viewed by 1407
Abstract
High-resolution satellite images such as KOMPSAT-3 data provide detailed geospatial information over areas of interest, even those located in inaccessible regions. High-resolution satellite cameras are designed with a long focal length and a narrow field of view to increase spatial resolution. Images therefore show relatively narrow swath widths (10–15 km) compared with the dozens or hundreds of kilometers typical of mid-/low-resolution satellite data, and users often face obstacles in orthorectifying and mosaicking a bundle of delivered images to create a complete image map. With a single mosaicked image at the sensor level, delivered with only radiometric correction, users can process and manage the data more simply and efficiently. We therefore propose sensor-level mosaicking to generate a seamless image product whose geometric accuracy meets mapping requirements. Among adjacent images with some overlap, one image serves as the reference, whereas the others are projected using their sensor model information together with Shuttle Radar Topography Mission (SRTM) elevation data. In the overlapped area, the geometric discrepancy between the datasets is modeled with a spline along the image line, based on image matching with outlier removal. The new sensor model information for the mosaicked image is generated by extending that of the reference image. Three strips of KOMPSAT-3 data were tested in the experiment. The data showed irregular image discrepancies between adjacent strips along the image line, and the proposed method successfully identified and removed these discrepancies. Additionally, the sensor modeling information of the resulting mosaic could be improved through the averaging effect of the input data. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)
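The per-line discrepancy modelling described in the abstract can be sketched roughly as follows; the tie-point data, noise level, and smoothing factor here are invented for illustration and are not the authors' actual pipeline:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical tie points: image line number vs. observed column
# discrepancy (pixels) between two overlapping satellite strips.
rng = np.random.default_rng(0)
lines = np.linspace(0, 20000, 200)
true_shift = 1.5 + 0.8 * np.sin(lines / 6000.0)      # smooth drift along the strip
obs = true_shift + rng.normal(0.0, 0.2, lines.size)  # matching noise

# Outlier screen before fitting, mirroring the paper's outlier removal.
resid = obs - np.median(obs)
keep = np.abs(resid) < 3.0 * np.std(resid)

# Smoothing spline of discrepancy as a function of image line;
# s is tuned to the assumed matching-noise level (0.2 px).
spl = UnivariateSpline(lines[keep], obs[keep], s=keep.sum() * 0.04)
corr = spl(lines)  # per-line correction to apply when resampling
print(float(np.abs(corr - true_shift).max()))
```

A spline (rather than a single global shift) captures the irregular, line-dependent discrepancies the abstract reports between adjacent strips.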

15 pages, 8388 KiB  
Article
Coupling Denoising to Detection for SAR Imagery
by Sujin Shin, Youngjung Kim, Insu Hwang, Junhee Kim and Sungho Kim
Appl. Sci. 2021, 11(12), 5569; https://0-doi-org.brum.beds.ac.uk/10.3390/app11125569 - 16 Jun 2021
Cited by 7 | Viewed by 1964
Abstract
Detecting objects in synthetic aperture radar (SAR) imagery has received much attention in recent years, since SAR can operate in all-weather, day-and-night conditions. With the rapid development of convolutional neural networks (CNNs), many methodologies have been proposed for SAR object detection. Despite this progress, existing detection networks are still limited in detection performance by the inherently noisy characteristics of SAR imagery; hence, a separate preprocessing step such as denoising (despeckling) is usually required before the SAR images can be used for deep learning. However, inappropriate denoising techniques can cause a loss of detailed information, and even proper denoising methods do not always guarantee a performance improvement. In this paper, we therefore propose a novel object detection framework that combines an unsupervised denoising network with a traditional two-stage detection network and leverages a strategy for fusing region proposals extracted from both the raw SAR image and the synthetically denoised SAR image. Extensive experiments validate the effectiveness of our framework on our own object detection datasets, constructed from remote sensing images of the TerraSAR-X and COSMO-SkyMed satellites. The proposed framework outperforms models using only noisy SAR images or only denoised SAR images, across multiple backbone networks. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)
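The proposal-fusion idea from the abstract can be illustrated with a minimal greedy merge of two proposal sets; the box format, scores, and IoU threshold are illustrative assumptions, not the paper's exact fusion strategy:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_proposals(raw_props, den_props, iou_thr=0.5):
    """Pool proposals from the raw and denoised images, then keep the
    highest-scoring box among near-duplicates (greedy, score-ordered)."""
    pooled = sorted(raw_props + den_props, key=lambda p: -p[1])
    kept = []
    for box, score in pooled:
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept

raw = [((10, 10, 50, 50), 0.9), ((80, 80, 120, 120), 0.6)]   # from raw SAR
den = [((12, 11, 52, 49), 0.8), ((200, 40, 240, 90), 0.7)]   # from denoised SAR
fused = fuse_proposals(raw, den)
print(len(fused))  # the two strongly overlapping boxes collapse into one
```

Pooling proposals from both inputs lets objects suppressed by despeckling (or hidden by speckle) survive into the second detection stage.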

10 pages, 7522 KiB  
Article
Rotational-Shearing-Interferometer Response for a Star-Planet System without Star Cancellation
by Beethoven Bravo-Medina, Marija Strojnik, Azael Mora-Nuñez and Héctor Santiago-Hernández
Appl. Sci. 2021, 11(8), 3322; https://0-doi-org.brum.beds.ac.uk/10.3390/app11083322 - 07 Apr 2021
Cited by 5 | Viewed by 1409
Abstract
The rotational shearing interferometer has been proposed for the direct detection of extrasolar planets. This interferometer cancels the star's radiation using destructive interference. However, the resulting signal is very small (a few photons/s per m²). We propose a novel method to enhance the signal magnitude by means of star–planet interference when the star radiation is not cancelled. We use computationally simulated interferograms to confirm the viability of the technique. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)

25 pages, 72656 KiB  
Article
Automatic Generation of Aerial Orthoimages Using Sentinel-2 Satellite Imagery with a Context-Based Deep Learning Approach
by Suhong Yoo, Jisang Lee, Junsu Bae, Hyoseon Jang and Hong-Gyoo Sohn
Appl. Sci. 2021, 11(3), 1089; https://0-doi-org.brum.beds.ac.uk/10.3390/app11031089 - 25 Jan 2021
Cited by 5 | Viewed by 2097
Abstract
Aerial images are an outstanding option for observing terrain, given their high-resolution (HR) capability, but their high operational cost makes periodic observation of a region of interest difficult. Satellite imagery is an alternative, but its low resolution is an obstacle. In this study, we propose a context-based approach that simulates 2.5 m and 5.0 m prediction images from the 10 m resolution bands of Sentinel-2 imagery, using an aerial orthoimage acquired over the same period. The proposed model was compared with the enhanced deep super-resolution network (EDSR), which performs excellently among existing super-resolution (SR) deep learning algorithms, using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-squared error (RMSE). Our context-based ResU-Net outperformed the EDSR in all three metrics. Including the 60 m resolution bands of Sentinel-2 imagery improved performance through fine-tuning: when the 60 m images were included, RMSE decreased while PSNR and SSIM increased. The results also showed that the denser the neural network, the higher the quality, and accuracy was highest when both denser feature dimensions and the 60 m images were used. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)
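Two of the three comparison metrics named in the abstract, RMSE and PSNR, are simple scalar computations; here is a minimal sketch on synthetic data, assuming images scaled to [0, 1] (SSIM needs a windowed implementation, e.g. `skimage.metrics.structural_similarity`, and is omitted):

```python
import numpy as np

def rmse(ref, pred):
    """Root-mean-squared error between a reference and a prediction."""
    return float(np.sqrt(np.mean((ref - pred) ** 2)))

def psnr(ref, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB, from the RMSE and the data range."""
    e = rmse(ref, pred)
    return float(20.0 * np.log10(data_range / e)) if e > 0 else float("inf")

# Synthetic stand-in for a reference orthoimage and a prediction
# corrupted by mild noise (illustrative only).
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
pred = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(rmse(ref, pred), psnr(ref, pred))
```

Lower RMSE and higher PSNR/SSIM together indicate the prediction is closer to the reference, which is the direction of improvement the abstract reports for the 60 m-augmented model.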

26 pages, 13643 KiB  
Article
Kinematic In Situ Self-Calibration of a Backpack-Based Multi-Beam LiDAR System
by Han Sae Kim, Yongil Kim, Changjae Kim and Kang Hyeok Choi
Appl. Sci. 2021, 11(3), 945; https://0-doi-org.brum.beds.ac.uk/10.3390/app11030945 - 21 Jan 2021
Cited by 3 | Viewed by 2334
Abstract
Light Detection and Ranging (LiDAR) remote sensing technology provides an efficient means of acquiring accurate 3D information from large-scale environments. Among the variety of LiDAR sensors, Multi-Beam LiDAR (MBL) sensors are one of the most extensively applied scanner types for mobile applications. Despite their efficiency, their observation accuracy is relatively low for mobile mapping applications, which require measurements at a higher level of accuracy. In addition, the measurement instability of MBL means that frequent re-calibration is necessary to maintain a high level of accuracy. Frequent in situ calibration prior to data acquisition is therefore an essential step in meeting accuracy requirements and deploying these scanners for precise mobile applications. In this study, kinematic in situ self-calibration of a backpack-based MBL system was investigated to develop an accurate backpack-based mobile mapping system. First, simulated datasets were generated and tested in a controlled environment to determine the minimum network configuration for self-calibration: our own simulator program generated datasets with various observation settings, network configurations, test sites, and targets, on which self-calibration was then carried out. Second, real datasets were captured in a kinematic situation to compare the calibration results with the simulation experiments. The results demonstrate that kinematic self-calibration of the backpack-based MBL system improved point cloud accuracy, reducing the Root Mean Square Error (RMSE) of planar misclosure by up to 81%. In conclusion, in situ self-calibration of the backpack-based MBL system can be performed using on-site datasets, yielding higher point cloud accuracy. In addition, by performing automatic calibration using the scan data, this method has the potential to be adapted for on-line re-calibration. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)

21 pages, 15412 KiB  
Article
A Learning-Based Image Fusion for High-Resolution SAR and Panchromatic Imagery
by Dae Kyo Seo and Yang Dam Eo
Appl. Sci. 2020, 10(9), 3298; https://0-doi-org.brum.beds.ac.uk/10.3390/app10093298 - 09 May 2020
Cited by 3 | Viewed by 2010
Abstract
Image fusion is an effective complementary method for obtaining information from multi-source data. In particular, the fusion of synthetic aperture radar (SAR) and panchromatic images contributes to better visual perception of objects and compensates for missing spatial information. However, conventional fusion methods fail to address the differences in imaging mechanisms and therefore cannot fully exploit all available information. This paper proposes a novel fusion method that both accounts for the differences in imaging mechanisms and provides sufficient spatial information. The proposed method is learning-based: it first selects the data to be used for learning; then, to reduce complexity, classification is performed on the stacked image and learning is carried out independently for each class; subsequently, various features are extracted from the SAR image to capture sufficient information. Learning relies on the model's ability to establish non-linear relationships, minimizing the effect of the differing imaging mechanisms, and uses a representative non-linear regression model, random forest regression. Finally, the performance of the proposed method is evaluated by comparison with conventional methods. The experimental results show that the proposed method is superior in both visual and quantitative terms, verifying its applicability. Full article
(This article belongs to the Special Issue Image Simulation in Remote Sensing)
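The random forest regression at the heart of the method can be sketched on synthetic data; the SAR-derived features and their relation to the panchromatic response below are invented stand-ins, not the paper's actual feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for SAR-derived features (backscatter- and
# texture-like) and the panchromatic intensity they should predict.
rng = np.random.default_rng(2)
n = 2000
sar = rng.random(n)
texture = rng.random(n)
X = np.column_stack([sar, texture])
# Invented non-linear relation standing in for the PAN response.
y = 0.7 * np.sqrt(sar) + 0.3 * texture**2 + rng.normal(0.0, 0.02, n)

# In the paper this regression is trained per land-cover class; here a
# single model illustrates the non-linear mapping itself.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
err = float(np.sqrt(np.mean((pred - y[1500:]) ** 2)))
print(err)  # residual close to the 0.02 noise floor
```

A non-parametric ensemble like a random forest can absorb the non-linear radiometric relationship between SAR backscatter and optical intensity without an explicit sensor model, which is the motivation the abstract gives for choosing it.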
