Remote Sensing Image Super Resolution

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 6548

Special Issue Editors


Guest Editor: Prof. Dr. Libao Zhang
School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
Interests: remote sensing image processing; super-resolution reconstruction; visual saliency analysis; deep learning; object detection

Guest Editor: Dr. Yu Li
Science Building, Beijing University of Technology, No. 100 Pingleyuan Road, Chaoyang Dist., Beijing 100124, China
Interests: remote sensing; image processing; Synthetic Aperture Radar applications; machine learning; multi-source information fusion

Guest Editor: Prof. Dr. Pedro Melo-Pinto
1. CITAB, University of Trás-os-Montes and Alto Douro, Vila Real, Portugal
2. Algoritmi Center, University of Minho, 4800-058 Guimarães, Portugal
Interests: computer vision; machine learning; hyperspectral imaging; image classification; object detection

Special Issue Information

Dear Colleagues,

The pursuit of high-resolution images to meet new challenges and needs never ceases in the field of remote sensing. Extensive applications of remote sensing images, such as fine-grained object classification, high-precision object detection, and detailed land monitoring, place a growing demand on spatial resolution. Super-resolution aims to recover high-frequency details from low-resolution observations and is a challenging ill-posed problem (a minimal sketch of the underlying degradation model follows the topic list below). Although recent advances in machine learning have brought tremendous improvements in super-resolution performance, many challenges remain in handling real-world scenes, including unknown noise, unknown blur kernels, and algorithm speed. This Special Issue will present the latest advances and trends in remote sensing image super-resolution algorithms and applications. Authors are encouraged to submit high-quality, original research papers on remote sensing image super-resolution. Topics of interest include, but are not limited to, the following:

  • Single-image super-resolution;
  • Multi-frame super-resolution;
  • Multispectral image super-resolution;
  • Hyperspectral image super-resolution;
  • Video satellite image super-resolution;
  • Spectral super-resolution;
  • Lightweight super-resolution model;
  • Pansharpening.
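
As a brief orientation for readers new to the problem setting, the sketch below illustrates the degradation model commonly assumed in single-image super-resolution: a high-resolution scene is blurred by an (often unknown) kernel, downsampled, and corrupted by noise. The Gaussian kernel, scale factor, and noise level used here are illustrative assumptions, not values prescribed by this Special Issue.

```python
# Minimal sketch of the classical SR degradation model: y = downsample(blur(x)) + n.
# The blur kernel, scale factor, and noise level below are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr_image: np.ndarray, scale: int = 4, blur_sigma: float = 1.5,
            noise_std: float = 0.01) -> np.ndarray:
    """Simulate a low-resolution observation from a high-resolution image."""
    blurred = gaussian_filter(hr_image, sigma=blur_sigma)      # blur with kernel k
    lr = blurred[::scale, ::scale]                             # downsample by factor s
    lr = lr + np.random.normal(0.0, noise_std, size=lr.shape)  # additive noise n
    return np.clip(lr, 0.0, 1.0)

# Example: a 256x256 synthetic "scene" degraded to 64x64.
hr = np.random.rand(256, 256)
lr = degrade(hr, scale=4)
print(hr.shape, "->", lr.shape)  # (256, 256) -> (64, 64)
```

Super-resolution inverts this mapping without knowing the kernel or the noise exactly, which is why the problem is ill-posed.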

Prof. Dr. Libao Zhang
Dr. Yu Li
Prof. Dr. Pedro Melo-Pinto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • super-resolution
  • hyperspectral images
  • video satellite images
  • blind super-resolution
  • spectral super-resolution
  • lightweight super-resolution
  • restoration

Published Papers (3 papers)


Research


28 pages, 19556 KiB  
Article
SEG-ESRGAN: A Multi-Task Network for Super-Resolution and Semantic Segmentation of Remote Sensing Images
by Luis Salgueiro, Javier Marcello and Verónica Vilaplana
Remote Sens. 2022, 14(22), 5862; https://doi.org/10.3390/rs14225862 - 19 Nov 2022
Cited by 3 | Viewed by 2752
Abstract
The production of highly accurate land cover maps is one of the primary challenges in remote sensing, and it depends on the spatial resolution of the input images. Sometimes, high-resolution imagery is not available or is too expensive to cover large areas or to perform multitemporal analysis. In this context, we propose a multi-task network that takes advantage of freely available Sentinel-2 imagery to produce a super-resolution image, with a scaling factor of 5, and the corresponding high-resolution land cover map. Our proposal, named SEG-ESRGAN, consists of two branches: a super-resolution branch, which produces Sentinel-2 multispectral images at 2 m resolution, and an encoder–decoder semantic segmentation branch, which generates the enhanced land cover map. Several skip connections are retrieved from the super-resolution branch and concatenated with features from the different stages of the encoder part of the segmentation branch, promoting the flow of meaningful information to boost the accuracy of the segmentation task. Our model is trained with a multi-loss approach on a novel dataset, developed from Sentinel-2 and WorldView-2 image pairs, for training and testing the super-resolution stage. In addition, we generated a dataset with ground-truth labels for the segmentation task. To assess the super-resolution improvement, the PSNR, SSIM, ERGAS, and SAM metrics were considered, while classification performance was measured with the IoU, the confusion matrix, and the F1-score. Experimental results demonstrate that the SEG-ESRGAN model outperforms different full segmentation and dual network models (U-Net, DeepLabV3+, HRNet, and Dual_DeepLab), allowing the generation of high-resolution land cover maps in challenging scenarios using Sentinel-2 10 m bands.
(This article belongs to the Special Issue Remote Sensing Image Super Resolution)
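
As a reading aid only, the fragment below sketches the general dual-branch idea described in the abstract: a super-resolution branch whose intermediate features are concatenated into the encoder of a segmentation branch. Layer sizes, module names, and the 5x upscaling path are simplified placeholders and do not reproduce the authors' SEG-ESRGAN architecture.

```python
# Simplified dual-branch sketch (not the published SEG-ESRGAN): an SR branch
# feeds skip features into a segmentation encoder via channel concatenation.
import torch
import torch.nn as nn

class DualBranchSketch(nn.Module):
    def __init__(self, in_ch=4, sr_ch=32, num_classes=6, scale=5):
        super().__init__()
        # Super-resolution branch: features at input resolution, then upsampled.
        self.sr_body = nn.Sequential(
            nn.Conv2d(in_ch, sr_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(sr_ch, sr_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.sr_head = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(sr_ch, in_ch, 3, padding=1),
        )
        # Segmentation encoder receives image bands plus SR skip features.
        self.seg_encoder = nn.Sequential(
            nn.Conv2d(in_ch + sr_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        skip = self.sr_body(x)                          # features shared via skip connection
        sr = self.sr_head(skip)                         # super-resolved image
        seg_in = torch.cat([x, skip], dim=1)            # concatenate SR features into encoder
        seg = self.seg_head(self.seg_encoder(seg_in))   # higher-resolution land cover logits
        return sr, seg

x = torch.randn(1, 4, 64, 64)   # e.g., four Sentinel-2 bands at 10 m
sr, seg = DualBranchSketch()(x)
print(sr.shape, seg.shape)      # torch.Size([1, 4, 320, 320]) torch.Size([1, 6, 320, 320])
```

The multi-loss training, ESRGAN-style blocks, and the specific encoder–decoder design of the paper are intentionally omitted here.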

19 pages, 6388 KiB  
Article
On the Co-Selection of Vision Transformer Features and Images for Very High-Resolution Image Scene Classification
by Souleyman Chaib, Dou El Kefel Mansouri, Ibrahim Omara, Ahmed Hagag, Sahraoui Dhelim and Djamel Amar Bensaber
Remote Sens. 2022, 14(22), 5817; https://doi.org/10.3390/rs14225817 - 17 Nov 2022
Cited by 7 | Viewed by 1974
Abstract
Recent developments in remote sensing technology have allowed us to observe the Earth with very high-resolution (VHR) images. VHR imagery scene classification is a challenging problem in the field of remote sensing. Vision transformer (ViT) models have achieved breakthrough results in image recognition tasks. However, different transformer encoder layers encode different levels of features: the deepest layer captures semantic information, whereas the earliest layers contain more detailed data but ignore the semantic content of an image scene. In this paper, a new deep framework is proposed for VHR scene understanding by exploring the strengths of ViT features in a simple and effective way. First, pre-trained ViT models are used to extract informative features from the original VHR image scene, where the transformer encoder layers generate the feature descriptors of the input images. Second, the obtained features are merged into a single data set. Third, some extracted ViT features do not describe the image scenes well, such as agriculture, meadows, and beaches, which could negatively affect the performance of the classification model. To deal with this challenge, we propose a new algorithm for feature and image co-selection: based on a similarity-preserving criterion over the whole data set, the most informative features and the most representative images are retained, while abnormal or irrelevant features and images that degrade classification accuracy are dropped. The proposed method was tested on three VHR benchmarks. The experimental results demonstrate that the proposed method outperforms other state-of-the-art methods.
(This article belongs to the Special Issue Remote Sensing Image Super Resolution)
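
The sketch below illustrates only the first step the abstract describes: collecting descriptors from several transformer encoder layers of a pre-trained ViT and merging them into one feature set. It uses torchvision's vit_b_16 with forward hooks as an assumed stand-in; the authors' actual backbone, layer choice, and feature/image co-selection algorithm are not reproduced here.

```python
# Collect intermediate ViT encoder features via forward hooks and merge them
# into one descriptor per image (a simplified stand-in for the paper's step 1).
import torch
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)   # in practice, pre-trained weights would be loaded
model.eval()

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Each encoder block outputs (batch, tokens, hidden_dim); keep the class token.
        captured[name] = output[:, 0, :].detach()
    return hook

# Hook a few encoder blocks; which layers to use is a design choice, not prescribed here.
for name, block in list(model.encoder.layers.named_children())[-4:]:
    block.register_forward_hook(make_hook(name))

with torch.no_grad():
    _ = model(torch.randn(2, 3, 224, 224))   # two dummy VHR image patches

# Merge the per-layer class-token descriptors into a single feature matrix.
features = torch.cat(list(captured.values()), dim=1)
print(features.shape)   # torch.Size([2, 3072]) for 4 layers of 768-dim tokens
```

The co-selection stage would then score these rows and columns and keep only the features and images that best preserve the similarity structure of the data set.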

Other


14 pages, 3773 KiB  
Technical Note
Comparison of Accelerated Versions of the Iterative Gradient Method to Ameliorate the Spatial Resolution of Microwave Radiometer Products
by Matteo Alparone, Ferdinando Nunziata, Claudio Estatico and Maurizio Migliaccio
Remote Sens. 2022, 14(20), 5246; https://doi.org/10.3390/rs14205246 - 20 Oct 2022
Viewed by 1092
Abstract
In this study, the enhancement of the spatial resolution of microwave radiometer measurements is addressed by contrasting the accuracy of a gradient-like antenna pattern deconvolution method with its accelerated versions, i.e., methods that reach a given accuracy with a reduced number of iterations. The analysis points out that the accelerated methods give improved performance when dealing with spot-like discontinuities, while they perform similarly to the canonical gradient method in the case of large discontinuities. A key application of such techniques is research on global warming and climate change, which has recently gained critical importance in many scientific fields, mainly due to the huge societal and economic impact of these topics over the entire planet. In this context, the availability of reliable long time series of remotely sensed Earth data is of paramount importance for identifying and studying climate trends. Such data can be obtained by large-scale sensors, with the obvious drawback of a poor spatial resolution that strongly limits their applicability in regional studies. Iterative gradient techniques allow obtaining super-resolution gridded passive microwave products that can be used in long time series of consistently calibrated brightness temperature maps in support of climate studies.
(This article belongs to the Special Issue Remote Sensing Image Super Resolution)
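
As an orientation aid only, the sketch below shows a plain Landweber-type (gradient-like) iteration for a linear observation model y = A x, which is the family of deconvolution schemes the technical note starts from. The toy operator, step size, and iteration count are illustrative assumptions, and the accelerated variants compared in the paper are not shown.

```python
# Plain Landweber (gradient-like) iteration for y = A x, used here as an
# illustrative stand-in for antenna-pattern deconvolution; A, tau, and n_iter
# are assumptions, not the paper's settings.
import numpy as np

def landweber(A, y, tau, n_iter=200):
    """Iterate x_{k+1} = x_k + tau * A^T (y - A x_k) from x_0 = 0."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x

# Toy 1-D "antenna pattern": each measurement is a local moving average.
rng = np.random.default_rng(0)
n = 100
A = np.zeros((n, n))
for i in range(n):
    A[i, max(0, i - 2):min(n, i + 3)] = 0.2   # 5-sample smoothing footprint

x_true = np.zeros(n)
x_true[30] = 1.0                              # spot-like discontinuities
x_true[70] = -0.5
y = A @ x_true + 0.01 * rng.standard_normal(n)

tau = 1.0 / np.linalg.norm(A, 2) ** 2         # convergence needs tau < 2 / ||A||^2
x_hat = landweber(A, y, tau)
print(np.round(x_hat[[30, 70]], 2))           # estimated amplitudes at the spots
```

Accelerated versions modify this update (for example, by adding a momentum term) so that a comparable residual is reached in fewer iterations, which is the trade-off the technical note evaluates.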
