Remote Sensing Image Restoration and Reconstruction

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 42730

Special Issue Editors

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: pattern analysis and machine learning; image processing engineering; application of remote sensing; computational intelligence and its application in remote sensing image processing
School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China
Interests: image quality improvement; remote sensing mapping and application; data fusion and assimilation; regional and global environmental changes
School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
Interests: image reconstruction; image denoising; image super-resolution; remote sensing image processing; data fusion and application

Special Issue Information

Dear Colleagues,

In real cases, remote sensing images usually suffer from noise (Gaussian noise, stripe noise, impulse noise, spectral noise, speckle noise, temporal noise, mixed noise, etc.); missing data (thick/thin cloud, shadow, sensor malfunction, etc.); and spatial resolution degradation caused by equipment limitations, working conditions, limited radiance energy, and generally narrow bandwidth. These phenomena severely degrade the quality of remote sensing images and limit the performance of subsequent processing, e.g., classification, unmixing, and target detection. Improving the quality of remote sensing images is therefore a critical preprocessing step. Remote sensing image restoration and reconstruction provide solutions to these degradation problems.

This Special Issue concerns restoration and reconstruction methods and their applications in remote sensing image processing. It will present the latest advances and trends in restoration and reconstruction algorithms and applications, addressing novel ideas and practical solutions to the above problems. The aim is to increase the usability and data quality of remote sensing images. Moreover, authors are encouraged to present hybrid methods that might include the use of machine learning approaches. Topics of interest include, but are not limited to, the following:

  • Remote sensing image denoising;
  • Remote sensing image fusion;
  • Remote sensing image super resolution;
  • Remote sensing image missing data reconstruction;
  • Remote sensing image radiation correction;
  • Remote sensing image geometric correction;
  • Remote sensing image restoration.
Prof. Liangpei Zhang
Prof. Huanfeng Shen
Prof. Qiangqiang Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multispectral image denoising
  • Hyperspectral image denoising
  • SAR image despeckling
  • Remote sensing image destriping
  • Remote sensing image restoration
  • Missing data reconstruction
  • Remote sensing image super-resolution
  • Pansharpening
  • Spatiotemporal fusion
  • Cloud/shadow removal

Published Papers (11 papers)


Research

18 pages, 4653 KiB  
Article
Discriminative Feature Learning Constrained Unsupervised Network for Cloud Detection in Remote Sensing Imagery
by Weiying Xie, Jian Yang, Yunsong Li, Jie Lei, Jiaping Zhong and Jiaojiao Li
Remote Sens. 2020, 12(3), 456; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12030456 - 01 Feb 2020
Cited by 6 | Viewed by 2359
Abstract
Cloud detection is a significant preprocessing step for increasing the exploitability of remote sensing imagery that faces various levels of difficulty due to the complexity of underlying surfaces, insufficient training data, and redundant information in high-dimensional data. To solve these problems, we propose an unsupervised network for cloud detection (UNCD) on multispectral (MS) and hyperspectral (HS) remote sensing images. The UNCD method enforces discriminative feature learning to obtain the residual error between the original input and the background in deep latent space, which is based on the observation that clouds are sparse and can be modeled as sparse outliers in remote sensing imagery. First, a compact representation of the original imagery is obtained by a latent adversarial learning constrained encoder. Meanwhile, the majority class with sufficient samples (i.e., background pixels) is more accurately reconstructed by the decoder than the clouds with limited samples. An image discriminator is used to prevent the generalization of out-of-class features caused by latent adversarial learning. To further highlight the background information in the deep latent space, a multivariate Gaussian distribution is introduced. In particular, the residual error with clouds highlighted and background samples suppressed is applied in the cloud detection in deep latent space. To evaluate the performance of the proposed UNCD method, experiments were conducted on both MS and HS datasets captured by various sensors over various scenes, including Landsat 8, GaoFen-1 (GF-1), and GaoFen-5 (GF-5), and the results demonstrate its state-of-the-art performance. The overall accuracy (OA) values for Images I and II from the Landsat 8 dataset were 0.9526 and 0.9536, respectively, and the OA values for Images III and IV from the GF-1 wide field of view (WFV) dataset were 0.9957 and 0.9934, respectively. Hence, the proposed method outperformed the other considered methods.
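
At its core, the detection stage scores each pixel by how poorly the background-oriented autoencoder reconstructs it, so clouds surface as large residuals. Below is a minimal sketch of such residual scoring, assuming trained encoder/decoder callables (hypothetical names) and a simple percentile threshold; the adversarial constraints and Gaussian background modeling of the full method are not reproduced.

```python
import numpy as np

def cloud_score_map(image, encoder, decoder, threshold_pct=95):
    """Residual-based cloud scoring (illustrative sketch).

    `encoder` and `decoder` are stand-ins for the trained UNCD
    networks. Background pixels, being the well-sampled majority
    class, reconstruct accurately; clouds, as sparse outliers,
    leave large residuals.
    image: (H, W, B) float array with B spectral bands.
    """
    recon = decoder(encoder(image))                    # background-dominated reconstruction
    residual = np.linalg.norm(image - recon, axis=-1)  # per-pixel residual magnitude
    cloud_mask = residual > np.percentile(residual, threshold_pct)
    return residual, cloud_mask
```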

25 pages, 33035 KiB  
Article
Generating High-Quality and High-Resolution Seamless Satellite Imagery for Large-Scale Urban Regions
by Xinghua Li, Zhiwei Li, Ruitao Feng, Shuang Luo, Chi Zhang, Menghui Jiang and Huanfeng Shen
Remote Sens. 2020, 12(1), 81; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12010081 - 24 Dec 2019
Cited by 14 | Viewed by 5466
Abstract
Urban geographical maps are important to urban planning, urban construction, land-use studies, disaster control and relief, touring and sightseeing, and so on. Satellite remote sensing images are the most important data source for urban geographical maps. However, for optical satellite remote sensing images with high spatial resolution, certain inevitable factors, including cloud, haze, and cloud shadow, severely degrade the image quality. Moreover, the geometrical and radiometric differences amongst multiple high-spatial-resolution images are difficult to eliminate. In this study, we propose a robust and efficient procedure for generating high-resolution and high-quality seamless satellite imagery for large-scale urban regions. This procedure consists of image registration, cloud detection, thin/thick cloud removal, pansharpening, and mosaicking processes. Methodologically, a spatially adaptive method considering the variation of atmospheric scattering and a stepwise replacement method based on local moment matching are proposed for removing thin and thick clouds, respectively. The effectiveness is demonstrated by a successful case of generating a 0.91-m-resolution image of the main city zone in Nanning, Guangxi Zhuang Autonomous Region, China, using images obtained from the Chinese Beijing-2 and Gaofen-2 high-resolution satellites.
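
The thick-cloud replacement step hinges on local moment matching, i.e., rescaling a cloud-free replacement patch so that its first- and second-order statistics agree with the target image locally. A minimal sketch of that adjustment for a single band follows; the paper's stepwise, locally windowed application is not reproduced.

```python
import numpy as np

def moment_match(patch, reference, eps=1e-6):
    """Match the mean and standard deviation of a cloud-free
    replacement patch to those of the reference region it will
    replace. Both arguments are 2-D arrays of the same band."""
    gain = (reference.std() + eps) / (patch.std() + eps)
    return (patch - patch.mean()) * gain + reference.mean()
```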

20 pages, 6854 KiB  
Article
Single Remote Sensing Image Dehazing Using a Prior-Based Dense Attentive Network
by Ziqi Gu, Zongqian Zhan, Qiangqiang Yuan and Li Yan
Remote Sens. 2019, 11(24), 3008; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11243008 - 13 Dec 2019
Cited by 32 | Viewed by 3817
Abstract
Remote sensing image dehazing is an extremely complex issue due to the irregular and non-uniform distribution of haze. In this paper, a prior-based dense attentive dehazing network (DADN) is proposed for single remote sensing image haze removal. The proposed network, which is constructed based on dense blocks and attention blocks, contains an encoder-decoder architecture, which enables it to directly learn the mapping between the input images and the corresponding haze-free images, without being dependent on the traditional atmospheric scattering model (ASM). To better handle non-uniform hazy remote sensing images, we propose to combine a haze density prior with deep learning, where an initial haze density map (HDM) is first extracted from the original hazy image and subsequently utilized as an input of the network, together with the original hazy image. Meanwhile, a large-scale hazy remote sensing dataset is created for training and testing of the proposed method, which contains both uniform and non-uniform, synthetic and real hazy remote sensing images. Experimental results on the created dataset illustrate that the developed dehazing method achieves significant improvements over the state-of-the-art methods.
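
The abstract does not spell out how the initial haze density map is extracted, so the sketch below uses a dark-channel-style estimate as a plausible stand-in prior and then stacks it with the hazy image, mirroring the way the HDM is fed to the network together with the original input. Function names and the patch size are assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def haze_density_map(hazy, patch=15):
    """Dark-channel-style haze density estimate (a common prior;
    the paper's exact HDM extraction may differ).
    hazy: (H, W, 3) image with values in [0, 1]."""
    return minimum_filter(hazy.min(axis=-1), size=patch)  # brighter = denser haze

def network_input(hazy):
    # The DADN consumes the hazy image and its HDM together.
    hdm = haze_density_map(hazy)
    return np.concatenate([hazy, hdm[..., None]], axis=-1)  # (H, W, 4)
```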

22 pages, 19146 KiB  
Article
Void Filling of Digital Elevation Models with a Terrain Texture Learning Model Based on Generative Adversarial Networks
by Zhonghang Qiu, Linwei Yue and Xiuguo Liu
Remote Sens. 2019, 11(23), 2829; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11232829 - 28 Nov 2019
Cited by 14 | Viewed by 5443
Abstract
Digital elevation models (DEMs) are an important information source for spatial modeling. However, data voids, which commonly exist in regions with rugged topography, result in incomplete DEM products, and thus significantly degrade DEM data quality. Interpolation methods are commonly used to fill voids of small sizes. For large-scale voids, multi-source fusion is an effective solution. Nevertheless, high-quality auxiliary source information is always difficult to retrieve in rugged mountainous areas. Thus, the void filling task is still a challenge. In this paper, we propose a method based on a deep convolutional generative adversarial network (DCGAN) to address the problem of DEM void filling. A terrain texture generation model (TTGM) was constructed based on the DCGAN framework. Elevation, terrain slope, and relief degree composed the samples in the training set to better depict the terrain textural features of the DEM data. Moreover, resize-convolution was utilized to replace the traditional deconvolution process to overcome the staircase effect in the generated data. The TTGM was trained on non-void SRTM (Shuttle Radar Topography Mission) 1-arc-second data patches in mountainous regions collected across the globe. Then, information neighboring the voids was used to infer the latent encoding for the missing areas, approximating the distribution of the training data. This was implemented with a loss function composed of pixel-wise, contextual, and perceptual constraints during the reconstruction process. The most appropriate fill surface generated by the TTGM was then employed to fill the voids, and Poisson blending was performed as a postprocessing step. Two models with different input sizes (64 × 64 and 128 × 128 pixels) were trained, so the proposed method can efficiently adapt to different sizes of voids. The experimental results indicate that the proposed method can obtain results with good visual perception and reconstruction accuracy, and is superior to classical interpolation methods.
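
The resize-convolution mentioned above replaces transposed convolution (deconvolution) with a plain resize followed by an ordinary convolution, which avoids the staircase (checkerboard) artifacts that strided deconvolutions tend to produce. A minimal PyTorch-style sketch, with illustrative channel counts:

```python
import torch.nn as nn

def resize_conv(in_ch, out_ch, scale=2):
    """Resize-convolution upsampling block: nearest-neighbor resize
    followed by a regular convolution, used in place of a transposed
    convolution to suppress staircase artifacts in generated terrain.
    Channel counts and activation are illustrative choices."""
    return nn.Sequential(
        nn.Upsample(scale_factor=scale, mode='nearest'),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )
```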

24 pages, 2645 KiB  
Article
Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder
by Gang He, Jiaping Zhong, Jie Lei, Yunsong Li and Weiying Xie
Remote Sens. 2019, 11(22), 2691; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11222691 - 18 Nov 2019
Cited by 12 | Viewed by 2682
Abstract
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials due to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to competently represent the spatial information of HR HS images, which is more comprehensive and representative. In particular, based on the adversarial autoencoder (AAE) network, the SCAAE network is built with a spectral constraint added to the loss function so that spectral consistency and a higher quality of spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. By analyzing the results from the experiments executed on the tested data sets through different methods, it can be found that, in CC, SAM, and RMSE, the performance of the proposed algorithm is improved by about 1.42%, 13.12%, and 29.26%, respectively, on average, which is preferable to the well-performing HySure method. Compared to the MRA-based method, the improvement of the proposed method in the above three indices is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66%, respectively, better than the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to the state-of-the-art fusion methods in terms of subjective and objective evaluations.
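
SAM, one of the three reported indices, measures spectral consistency as the mean angle between corresponding spectral vectors of the fused and reference images; lower is better. A straightforward NumPy implementation for (H, W, B) arrays:

```python
import numpy as np

def spectral_angle_mapper(x, y, eps=1e-8):
    """Mean spectral angle (radians) between two (H, W, B) images.
    Each pixel contributes the angle between its two spectra."""
    dot = (x * y).sum(axis=-1)
    norms = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)  # guard rounding errors
    return float(np.mean(np.arccos(cos)))
```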

22 pages, 39930 KiB  
Article
Deep Self-Learning Network for Adaptive Pansharpening
by Jie Hu, Zhi He and Jiemin Wu
Remote Sens. 2019, 11(20), 2395; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11202395 - 16 Oct 2019
Cited by 12 | Viewed by 3269
Abstract
Deep learning (DL)-based paradigms have recently made many advances in image pansharpening. However, most of the existing methods directly downscale the multispectral (MSI) and panchromatic (PAN) images with a default blur kernel to construct the training set, which leads to deteriorated results when the real image does not obey this degradation model. In this paper, a deep self-learning (DSL) network is proposed for adaptive image pansharpening. First, rather than using a fixed blur kernel, a point spread function (PSF) estimation algorithm is proposed to obtain the blur kernel of the MSI. Second, an edge-detection-based pixel-to-pixel image registration method is designed to recover the local misalignments between the MSI and PAN. Third, the original data are downscaled by the estimated PSF and the pansharpening network is trained in the down-sampled domain. The high-resolution result can finally be predicted by the trained DSL network using the original MSI and PAN. Extensive experiments on three images collected by different satellites prove the superiority of our DSL technique compared with some state-of-the-art approaches.
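
The self-learning step builds training pairs by degrading the original MSI with the estimated PSF rather than a default kernel, in the spirit of a Wald-style protocol. A minimal sketch for a single band, where `psf` stands in for whatever kernel the estimation stage returns (its computation is not shown):

```python
import numpy as np
from scipy.ndimage import convolve

def downscale_with_psf(band, psf, ratio=4):
    """Blur a single band with the estimated PSF, then decimate by
    `ratio`, producing a low-resolution counterpart for training in
    the down-sampled domain. `psf` is a 2-D kernel summing to 1."""
    blurred = convolve(band, psf, mode='reflect')
    return blurred[::ratio, ::ratio]
```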

19 pages, 5938 KiB  
Article
Spatial–Spectral Fusion in Different Swath Widths by a Recurrent Expanding Residual Convolutional Neural Network
by Jiang He, Jie Li, Qiangqiang Yuan, Huifang Li and Huanfeng Shen
Remote Sens. 2019, 11(19), 2203; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11192203 - 20 Sep 2019
Cited by 6 | Viewed by 2965
Abstract
The quality of remotely sensed images is usually determined by their spatial resolution, spectral resolution, and coverage. However, due to limitations in the sensor hardware, the spectral resolution, spatial resolution, and swath width of the coverage are mutually constrained. Remote sensing image fusion aims at overcoming these constraints to combine the useful information in the different images. However, the traditional spatial–spectral fusion approach uses data of the same swath width covering the same area and only considers the mutual constraint between the spectral resolution and spatial resolution. To simultaneously solve the image fusion problems of swath width, spatial resolution, and spectral resolution, this paper introduces a method with multi-scale feature extraction and residual learning with recurrent expanding. To investigate the sensitivity of the convolution operation to different variables of images in different swath widths, we conducted sensitivity experiments on the coverage ratio and offset position. We also performed simulated and real-data experiments with Sentinel-2 data, in which the different swath widths were simulated, to verify the effectiveness of the proposed framework.

21 pages, 4014 KiB  
Article
Underwater Image Restoration Based on a Parallel Convolutional Neural Network
by Keyan Wang, Yan Hu, Jun Chen, Xianyun Wu, Xi Zhao and Yunsong Li
Remote Sens. 2019, 11(13), 1591; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11131591 - 04 Jul 2019
Cited by 49 | Viewed by 4551
Abstract
Restoring degraded underwater images is a challenging ill-posed problem. The existing prior-based approaches have limited performance in many situations due to their reliance on handcrafted features. In this paper, we propose an effective convolutional neural network (CNN) for underwater image restoration. The proposed network consists of two parallel branches: a transmission estimation network (T-network) and a global ambient light estimation network (A-network); in particular, the T-network employs cross-layer connection and multi-scale estimation to prevent halo artifacts and to preserve edge features. The estimates produced by these two branches are leveraged to restore the clear image according to the underwater optical imaging model. Moreover, we develop a new underwater image synthesizing method for building the training datasets, which can simulate images captured in various underwater environments. Experimental results based on synthetic and real images demonstrate that our restored underwater images exhibit more natural color correction and better visibility improvement compared with several state-of-the-art methods.
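
The final restoration step inverts a simplified imaging model of the form I = J·t + A·(1 − t), using the transmission map from the T-network and the ambient light from the A-network. A hedged NumPy sketch of that inversion, with a lower bound on transmission to keep the division stable (the exact model terms used in the paper may differ):

```python
import numpy as np

def restore_underwater(image, transmission, ambient, t_min=0.1):
    """Invert I = J * t + A * (1 - t) per channel.
    image: (H, W, 3) observed image in [0, 1];
    transmission: (H, W) map from the T-network;
    ambient: (3,) global light from the A-network."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid division blow-up
    restored = (image - ambient) / t + ambient
    return np.clip(restored, 0.0, 1.0)
```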

18 pages, 5141 KiB  
Article
Virtual Restoration of Stained Chinese Paintings Using Patch-Based Color Constrained Poisson Editing with Selected Hyperspectral Feature Bands
by Pingping Zhou, Miaole Hou, Shuqiang Lv, Xuesheng Zhao and Wangting Wu
Remote Sens. 2019, 11(11), 1384; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11111384 - 10 Jun 2019
Cited by 8 | Viewed by 3133
Abstract
Stains, as one of the most common degradations of paper cultural relics, not only affect paintings’ appearance, but sometimes even cover the text, patterns, and colors contained in the relics. Virtual restoration based on common red–green–blue (RGB) images, which removes the degradations and then fills the lacuna regions from the image’s known parts with inpainting technology, can produce a visually plausible result. However, due to the lack of information inside the degradations, such methods always yield inconsistent structures when stains cover several color materials. To effectively remove the stains and restore the covered original contents of Chinese paintings, a novel method based on Poisson editing is proposed that exploits the information inside the degradations of three selected feature bands as auxiliary information to guide the restoration, since the selected feature bands capture fewer stains and can expose the covered information. To make Poisson editing suitable for stain removal, the feature bands were also exploited to search for the optimal patch for the pixels in the stain region, and the searched patch was used to construct a color constraint on the original Poisson editing to ensure the restoration of the original colors of the paintings. Specifically, this method mainly consists of two steps: feature band selection from hyperspectral data by establishing rules, and reconstruction of stain-contaminated regions of the RGB image with color-constrained Poisson editing. Four Chinese paintings (‘Fishing’, ‘Crane and Banana’, ‘the Hui Nationality Painting’, and ‘Lotus Pond and Wild Goose’) with different color materials were used to test the performance of the proposed method. Visual results show that this method can effectively remove or dilute the stains while restoring a painting’s original colors. By comparing the values of restored pixels with non-stained pixels (references of the same color materials), images processed by the proposed method had the lowest average root mean square error (RMSE), normalized absolute error (NAE), and average difference (AD), which indicates that it is an effective method for restoring stains on Chinese paintings.
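
Poisson editing itself amounts to solving a discrete Poisson equation inside the stained region: the filled pixels should reproduce the guidance field’s Laplacian while agreeing with the surrounding image at the region boundary. The following is a minimal single-channel Jacobi-iteration sketch of that idea, without the paper’s hyperspectral band selection or color constraint; it assumes the mask does not touch the image border.

```python
import numpy as np

def poisson_fill(target, guide, mask, iters=2000):
    """Fill `mask` pixels of `target` so their Laplacian matches the
    guidance image `guide`, with boundary values taken from the
    surrounding target pixels (plain Jacobi iteration).
    All arrays are 2-D; the mask must not touch the image border."""
    out = target.astype(float).copy()
    # Divergence of the guidance field = discrete Laplacian of `guide`.
    lap = (np.roll(guide, 1, 0) + np.roll(guide, -1, 0) +
           np.roll(guide, 1, 1) + np.roll(guide, -1, 1) - 4.0 * guide)
    for _ in range(iters):
        nbr_sum = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                   np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = (nbr_sum - lap)[mask] / 4.0
    return out
```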

24 pages, 6307 KiB  
Article
Reconstructing Cloud Contaminated Pixels Using Spatiotemporal Covariance Functions and Multitemporal Hyperspectral Imagery
by Yoseline Angel, Rasmus Houborg and Matthew F. McCabe
Remote Sens. 2019, 11(10), 1145; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101145 - 14 May 2019
Cited by 5 | Viewed by 3600
Abstract
One of the major challenges in optical-based remote sensing is the presence of clouds, which imposes a hard constraint on the use of multispectral or hyperspectral satellite imagery for earth observation. While some studies have used interpolation models to remove cloud-affected data, relatively few aim at restoration via the use of multi-temporal reference images. This paper proposes not only the use of image time-series, but also the implementation of a geostatistical model that considers the spatiotemporal correlation between them to fill the cloud-related gaps. Using Hyperion hyperspectral images, we demonstrate a capacity to reconstruct cloud-affected pixels and predict their underlying surface reflectance values. To do this, cloudy pixels were masked and a parametric family of non-separable covariance functions was automatically fitted using a composite likelihood estimator. A subset of cloud-free pixels per scene was used to perform a kriging interpolation and to predict the spectral reflectance for each cloud-affected pixel. The approach was evaluated using a benchmark dataset of cloud-free pixels, with a synthetic cloud superimposed upon these data. An overall root mean square error (RMSE) of between 0.5% and 16% of the reflectance was achieved, representing a relative root mean square error (rRMSE) of between 0.2% and 7.5%. The spectral similarity between the predicted and reference reflectance signatures was described by a mean spectral angle (MSA) of between 1° and 11°, demonstrating the spatial and spectral coherence of the predictions. The approach provides an efficient spatiotemporal interpolation framework for cloud removal, gap-filling, and denoising in remotely sensed datasets.
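
Kriging predicts each cloud-affected pixel as a covariance-weighted combination of cloud-free samples in space and time. The sketch below uses simple kriging with a separable exponential space-time covariance as a stand-in; the paper instead fits a non-separable covariance family by composite likelihood, which is not reproduced here. All parameter names are illustrative.

```python
import numpy as np

def simple_krige(coords, values, query, mean, sill=1.0, srange=10.0, trange=5.0):
    """Simple kriging with a separable exponential space-time covariance.
    coords: (n, 3) array with columns (x, y, t) of cloud-free samples;
    values: (n,) reflectances at those samples;
    query: (3,) location/time of the cloud-affected pixel;
    mean: known process mean (simple kriging assumption)."""
    def cov(a, b):
        ds = np.linalg.norm(a[..., :2] - b[..., :2], axis=-1)  # spatial lag
        dt = np.abs(a[..., 2] - b[..., 2])                     # temporal lag
        return sill * np.exp(-ds / srange) * np.exp(-dt / trange)

    K = cov(coords[:, None, :], coords[None, :, :])  # (n, n) sample covariances
    k = cov(coords, query[None, :])                  # (n,) sample-to-query
    w = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)  # kriging weights
    return mean + w @ (values - mean)
```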

19 pages, 8720 KiB  
Article
Domain Transfer Learning for Hyperspectral Image Super-Resolution
by Xiaoyan Li, Lefei Zhang and Jane You
Remote Sens. 2019, 11(6), 694; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11060694 - 22 Mar 2019
Cited by 14 | Viewed by 3931
Abstract
A Hyperspectral Image (HSI) contains a great number of spectral bands for each pixel; however, the spatial resolution of HSI is low. Hyperspectral image super-resolution is effective for enhancing the spatial resolution while preserving the high spectral resolution by software techniques. Recently, methods have been presented that fuse the HSI with Multispectral Images (MSI), assuming that an MSI of the same scene is available along with the observed HSI, which limits the super-resolution reconstruction quality. In this paper, a new framework based on domain transfer learning for HSI super-resolution is proposed to enhance the spatial resolution of HSI by learning knowledge from general-purpose optical images (natural scene images) and exploiting the cross-correlation between the observed low-resolution HSI and high-resolution MSI. First, the relationship between low- and high-resolution images is learned by a single convolutional super-resolution network and then transferred to HSI by the idea of transfer learning. Second, the obtained Pre-high-resolution HSI (pre-HSI), the observed low-resolution HSI, and the high-resolution MSI are simultaneously considered to estimate the endmember matrix and the abundance code for learning the spectral characteristics. Experimental results on ground-based and remote sensing datasets demonstrate that the proposed method achieves comparable performance and outperforms the existing HSI super-resolution methods.
