Advances in Remote Sensing Image Fusion

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 January 2020) | Viewed by 51742

Special Issue Editors

Dr. Michael Schmitt
Signal Processing in Earth Observation, Technical University of Munich, 80333 Munich, Germany
Interests: remote sensing; data fusion; machine learning; geospatial data science

Dr. Christine Pohl
Consulting, Breslauer Str. 48, 49088 Osnabrueck, Germany
Interests: remote sensing; image fusion; optical; radar; geocoding; quality assessment; monitoring; change detection; palm oil; tropical remote sensing

Special Issue Information

Dear Colleagues,

The origins of data fusion in remote sensing lie in remote sensing image fusion, and many exciting approaches to this task have already been summarized in comprehensive review papers. Today, however, we are living in the “golden era of Earth observation”, with ever more sensors and growing volumes of different kinds of data readily at our disposal. Concurrently, machine learning has become a standard tool in remote sensing data analysis, and powerful deep learning has opened thrilling new perspectives both for classic information extraction problems and, in particular, for the fusion of different kinds of remote sensing imagery. This Special Issue therefore aims to collect the latest advances in remote sensing image fusion driven by these recent developments.

In particular, we encourage the submission of papers related to the following methodological concepts:

  • Multi-sensor image fusion for artificial image synthesis
  • Multi-resolution image fusion and scale issues in image fusion
  • Pixel- and feature-based fusion for classification
  • Multi-temporal image fusion / change detection
  • Convolutional neural networks for image-to-image translation
  • Decision fusion of intermediate analysis results
  • Quality assessment of fused images
  • Achievements of image fusion

From a sensor-centered point of view, we seek submissions especially from the following fields:

  • Multispectral/hyperspectral image fusion
  • Pansharpening of multispectral/hyperspectral images
  • Fusion of SAR and optical images
  • Fusion of remote sensing images and other images
Dr. Michael Schmitt
Dr. Christine Pohl
Dr. Bo Huang 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data fusion
  • image fusion
  • multi-resolution
  • classification
  • image-to-image translation
  • decision fusion
  • pansharpening
  • multi-sensor fusion
  • change detection
  • spatio-spectral fusion
  • spatio-temporal fusion
  • accuracy assessment
  • fused image quality
  • scale

Published Papers (9 papers)


Research


37 pages, 19459 KiB  
Article
Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases
by Andreas Schmitt, Anna Wendleder, Rüdiger Kleynmans, Maximilian Hell, Achim Roth and Stefan Hinz
Remote Sens. 2020, 12(6), 943; https://doi.org/10.3390/rs12060943 - 14 Mar 2020
Cited by 12 | Viewed by 3942
Abstract
This article spanned a new, consistent framework for production, archiving, and provision of analysis ready data (ARD) from multi-source and multi-temporal satellite acquisitions and a subsequent image fusion. The core of the image fusion was an orthogonal transform of the reflectance channels from optical sensors on hypercomplex bases, delivered in Kennaugh-like elements, which are well known from polarimetric radar. In this way, SAR and optical data could be fused into one image data set sharing the characteristics of both: the sharpness of optics and the texture of SAR. The special properties of Kennaugh elements regarding their scaling—linear, logarithmic, normalized—applied likewise to the new elements and guaranteed their robustness towards noise, radiometric sub-sampling, and therewith data compression. This study combined Sentinel-1 and Sentinel-2 on an octonion basis as well as Sentinel-2 and ALOS-PALSAR-2 on a sedenion basis. The validation using signatures of typical land cover classes showed that the efficient archiving in 4 bit images still guaranteed an accuracy of over 90% in the class assignment. Due to the stability of the resulting class signatures, the fuzziness to be caught by machine learning algorithms was minimized at the same time. Thus, this methodology was predestined to act as a new standard for ARD remote sensing data with a subsequent image fusion processed in so-called data cubes. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
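
As an editorial illustration of the orthogonal channel transform described in the abstract above, the following minimal Python sketch projects a stack of co-registered SAR and optical bands onto an orthonormal Hadamard basis to obtain Kennaugh-like sum/difference elements. The 8x8 Hadamard basis, the band selection, and the function name are illustrative assumptions; the paper's octonion/sedenion formulation and Kennaugh scaling schemes are not reproduced here.

```python
# Minimal sketch of an orthogonal, Kennaugh-like channel transform for fusing
# co-registered SAR and optical bands. The Hadamard basis below is an
# illustrative stand-in for the hypercomplex bases used in the paper; band
# selection and scaling are assumptions, not the authors' exact processing.
import numpy as np
from scipy.linalg import hadamard

def kennaugh_like_elements(sar_bands, optical_bands):
    """Stack 2 SAR intensity bands and 6 optical reflectance bands (H, W each)
    and project every pixel onto an orthonormal 8-dimensional basis."""
    stack = np.stack(list(sar_bands) + list(optical_bands), axis=-1)  # (H, W, 8)
    assert stack.shape[-1] == 8, "this toy transform expects exactly 8 channels"
    basis = hadamard(8) / np.sqrt(8.0)   # orthonormal: basis @ basis.T = identity
    return stack @ basis.T               # (H, W, 8) Kennaugh-like elements

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sar = [rng.random((64, 64)) for _ in range(2)]   # e.g. Sentinel-1 VV, VH
    opt = [rng.random((64, 64)) for _ in range(6)]   # e.g. six Sentinel-2 bands
    print(kennaugh_like_elements(sar, opt).shape)    # (64, 64, 8)
```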

29 pages, 10441 KiB  
Article
An Object-Based Strategy for Improving the Accuracy of Spatiotemporal Satellite Imagery Fusion for Vegetation-Mapping Applications
by Hongcan Guan, Yanjun Su, Tianyu Hu, Jin Chen and Qinghua Guo
Remote Sens. 2019, 11(24), 2927; https://doi.org/10.3390/rs11242927 - 06 Dec 2019
Cited by 11 | Viewed by 3126
Abstract
Spatiotemporal data fusion is a key technique for generating unified time-series images from various satellite platforms to support the mapping and monitoring of vegetation. However, the high similarity in the reflectance spectra of different vegetation types poses an enormous challenge to the similar-pixel selection procedure of spatiotemporal data fusion, which may lead to considerable uncertainties in the fusion result. Here, we propose an object-based spatiotemporal data-fusion framework that replaces the original similar-pixel selection procedure with an object-restricted method to address this issue. The proposed framework can be applied to any spatiotemporal data-fusion algorithm based on similar pixels. In this study, we modified the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) and the flexible spatiotemporal data-fusion model (FSDAF) using the proposed framework, and evaluated their performances in fusing Sentinel 2 and Landsat 8 images, Landsat 8 and Moderate-resolution Imaging Spectroradiometer (MODIS) images, and Sentinel 2 and MODIS images in a study site covered by grasslands, croplands, coniferous forests, and broadleaf forests. The results show that the proposed object-based framework can improve all three data-fusion algorithms significantly by delineating vegetation boundaries more clearly; the improvement on FSDAF is the greatest among the three algorithms, with an average decrease of 2.8% in relative root-mean-square error (rRMSE) across all sensor combinations. Moreover, the improvement on fusing Sentinel 2 and Landsat 8 images is more significant (an average decrease of 2.5% in rRMSE). By using the fused images generated from the proposed object-based framework, we can improve the vegetation mapping result by significantly reducing the “pepper-salt” effect. We believe that the proposed object-based framework has great potential for generating time-series high-resolution remote-sensing data for vegetation-mapping applications. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
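
The key modification described in the abstract above, restricting similar-pixel selection to segmentation objects, can be illustrated with a short sketch. The window size, spectral threshold, and the toy segmentation are assumptions for illustration; STARFM, ESTARFM and FSDAF themselves are not implemented here.

```python
# Minimal sketch of object-restricted similar-pixel selection: candidate
# neighbours for a target pixel come from a moving window but are kept only if
# they lie inside the same segmentation object and are spectrally close.
import numpy as np

def similar_pixels_in_object(image, objects, row, col, win=15, threshold=0.05):
    """image: (H, W, B) fine-resolution reflectance; objects: (H, W) integer labels.
    Returns (rows, cols) of spectrally similar pixels within the same object."""
    h, w, _ = image.shape
    half = win // 2
    r0, r1 = max(0, row - half), min(h, row + half + 1)
    c0, c1 = max(0, col - half), min(w, col + half + 1)
    window = image[r0:r1, c0:c1, :]
    labels = objects[r0:r1, c0:c1]
    target = image[row, col, :]
    diff = np.mean(np.abs(window - target), axis=-1)   # mean spectral difference
    mask = (diff <= threshold) & (labels == objects[row, col])
    rows, cols = np.nonzero(mask)
    return rows + r0, cols + c0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((100, 100, 4))
    seg = (rng.random((100, 100)) > 0.5).astype(int)   # toy two-object segmentation
    rr, cc = similar_pixels_in_object(img, seg, 50, 50, threshold=0.3)
    print(len(rr), "similar pixels restricted to the object")
```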

22 pages, 10469 KiB  
Article
Hyperspectral Image Super-Resolution via Adaptive Dictionary Learning and Double ℓ1 Constraint
by Songze Tang, Yang Xu, Lili Huang and Le Sun
Remote Sens. 2019, 11(23), 2809; https://doi.org/10.3390/rs11232809 - 27 Nov 2019
Cited by 6 | Viewed by 2981
Abstract
Hyperspectral image (HSI) super-resolution (SR) is an important technique for improving the spatial resolution of HSI. Recently, a method based on sparse representation improved the performance of HSI SR significantly. However, the spectral dictionary was learned under a fixed size, chosen empirically without considering the training data. Moreover, most of the existing methods fail to explore the relationship among the sparse coefficients. To address these crucial issues, an effective method for HSI SR is proposed in this paper. First, a spectral dictionary is learned whose size is adaptively estimated according to the input HSI without any prior information. Then, the proposed method exploits the nonlocal correlation of the sparse coefficients, and a double ℓ1 regularized sparse representation is introduced to achieve better reconstructions for HSI SR. Finally, a high spatial resolution HSI is generated from the obtained coefficient matrix and the learned adaptive-size spectral dictionary. To evaluate the performance of the proposed method, we conduct experiments on two well-known datasets. The experimental results demonstrate that it outperforms several state-of-the-art methods in terms of popular universal quality evaluation indexes. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
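
The sparse-coding step underlying dictionary-based HSI super-resolution of this kind can be illustrated with a basic ISTA solver for a single ℓ1 penalty. This is a simplified stand-in: the paper's adaptive dictionary sizing and second, nonlocal ℓ1 term are not reproduced, and all matrix sizes below are assumed.

```python
# Minimal ISTA sketch for l1-regularised sparse coding of hyperspectral pixels
# against a spectral dictionary (a toy single-l1 version of the double-l1 model).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_codes(D, Y, lam=0.01, n_iter=200):
    """D: (bands, atoms) spectral dictionary; Y: (bands, pixels) observed spectra.
    Solves min_A 0.5*||Y - D A||_F^2 + lam*||A||_1 column-wise with ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)
        A = soft_threshold(A - grad / L, lam / L)
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    D = rng.standard_normal((50, 80))           # 50 bands, 80 atoms (assumed sizes)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    Y = D @ (rng.standard_normal((80, 10)) * (rng.random((80, 10)) < 0.1))
    A = ista_sparse_codes(D, Y, lam=0.05)
    print("reconstruction error:", np.linalg.norm(Y - D @ A))
```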

19 pages, 14147 KiB  
Article
SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks—Optimization, Opportunities and Limits
by Mario Fuentes Reyes, Stefan Auer, Nina Merkle, Corentin Henry and Michael Schmitt
Remote Sens. 2019, 11(17), 2067; https://doi.org/10.3390/rs11172067 - 03 Sep 2019
Cited by 104 | Viewed by 10741
Abstract
Due to its all-time capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not familiar with the impact of distance-dependent imaging, signal intensities detected in the radar spectrum, and image characteristics related to speckle or post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of original data. A conditional adversarial network is adopted and optimized in order to generate alternative SAR image representations based on the combination of SAR images (starting point) and optical images (reference) for training. Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of results on follow-up applications, and the discussion of opportunities/drawbacks related to this application of deep learning. Case study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide the basis for explaining fundamental limitations affecting the SAR-to-optical image translation idea but also indicate benefits from alternative SAR image representations. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
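
The conditional-adversarial objective referred to above follows the general pix2pix recipe; the sketch below shows a toy generator/discriminator pair and the combined adversarial plus L1 loss in PyTorch. Network depth, channel counts and the L1 weight are placeholder assumptions, not the architecture optimized in the paper.

```python
# Minimal PyTorch sketch of a pix2pix-style conditional GAN objective for
# SAR-to-optical translation: the discriminator sees (SAR, optical) pairs and
# the generator is trained with an adversarial term plus an L1 term.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, sar):
        return self.net(sar)

class TinyPatchDiscriminator(nn.Module):
    def __init__(self, in_ch=1 + 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),      # patch-wise real/fake map
        )
    def forward(self, sar, optical):
        return self.net(torch.cat([sar, optical], dim=1))

def cgan_losses(G, D, sar, optical, l1_weight=100.0):
    bce = nn.BCEWithLogitsLoss()
    fake = G(sar)
    # discriminator loss on real and (detached) fake pairs
    d_real = D(sar, optical)
    d_fake = D(sar, fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    # generator loss: fool the discriminator + stay close to the optical reference
    g_adv = D(sar, fake)
    g_loss = bce(g_adv, torch.ones_like(g_adv)) + l1_weight * nn.functional.l1_loss(fake, optical)
    return d_loss, g_loss

if __name__ == "__main__":
    G, D = TinyGenerator(), TinyPatchDiscriminator()
    sar = torch.randn(2, 1, 64, 64)          # toy SAR patches
    opt = torch.rand(2, 3, 64, 64) * 2 - 1   # toy optical reference scaled to [-1, 1]
    d_loss, g_loss = cgan_losses(G, D, sar, opt)
    print(float(d_loss), float(g_loss))
```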

27 pages, 6834 KiB  
Article
Coupled Higher-Order Tensor Factorization for Hyperspectral and LiDAR Data Fusion and Classification
by Zhaohui Xue, Sirui Yang, Hongyan Zhang and Peijun Du
Remote Sens. 2019, 11(17), 1959; https://doi.org/10.3390/rs11171959 - 21 Aug 2019
Cited by 7 | Viewed by 2901
Abstract
Hyperspectral and light detection and ranging (LiDAR) data fusion and classification has been an active research topic, and intensive studies have been made based on mathematical morphology. However, matrix-based concatenation of morphological features may not be so distinctive, compact, and optimal for classification. In this work, we propose a novel Coupled Higher-Order Tensor Factorization (CHOTF) model for hyperspectral and LiDAR data classification. The innovative contributions of our work are that we model different features as multiple third-order tensors, and we formulate a CHOTF model to jointly factorize those tensors. Firstly, third-order tensors are built based on spectral-spatial features extracted via attribute profiles (APs). Secondly, the CHOTF model is defined to jointly factorize the multiple higher-order tensors. Then, the latent features are generated by mode-n tensor-matrix product based on the shared and unshared factors. Lastly, classification is conducted by using sparse multinomial logistic regression (SMLR). Experimental results, conducted with two popular hyperspectral and LiDAR data sets collected over the University of Houston and the city of Trento, respectively, indicate that the proposed framework outperforms the other methods, i.e., different dimensionality-reduction-based methods, independent third-order tensor factorization based methods, and some recently proposed hyperspectral and LiDAR data fusion and classification methods. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
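
The idea of jointly factorizing several third-order feature tensors with shared and unshared factors, as described in the abstract above, can be illustrated with a toy coupled CP decomposition solved by alternating least squares. This is a simplified stand-in for the CHOTF model: the rank, tensor sizes and plain CP structure are assumptions, and the SMLR classification stage is omitted.

```python
# Toy coupled CP-style factorization of two third-order tensors that share their
# first two mode factors (A, B) and keep unshared third-mode factors (C1, C2).
import numpy as np
from scipy.linalg import khatri_rao

def unfold(T, mode):
    """Kolda mode-n unfolding of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order="F")

def coupled_cp(X, Y, rank=5, n_iter=50, seed=0):
    """X: (I, J, K1) HSI feature tensor, Y: (I, J, K2) LiDAR feature tensor."""
    rng = np.random.default_rng(seed)
    I, J, K1 = X.shape
    K2 = Y.shape[2]
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C1 = rng.standard_normal((K1, rank))
    C2 = rng.standard_normal((K2, rank))
    for _ in range(n_iter):
        # shared factors are fit against both tensors jointly
        Z = np.vstack([khatri_rao(C1, B), khatri_rao(C2, B)])
        M = np.hstack([unfold(X, 0), unfold(Y, 0)])
        A = np.linalg.lstsq(Z, M.T, rcond=None)[0].T
        Z = np.vstack([khatri_rao(C1, A), khatri_rao(C2, A)])
        M = np.hstack([unfold(X, 1), unfold(Y, 1)])
        B = np.linalg.lstsq(Z, M.T, rcond=None)[0].T
        # unshared factors are fit against their own tensor only
        C1 = np.linalg.lstsq(khatri_rao(B, A), unfold(X, 2).T, rcond=None)[0].T
        C2 = np.linalg.lstsq(khatri_rao(B, A), unfold(Y, 2).T, rcond=None)[0].T
    return A, B, C1, C2

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.random((20, 15, 12))   # toy stand-ins for attribute-profile tensors
    Y = rng.random((20, 15, 7))
    A, B, C1, C2 = coupled_cp(X, Y, rank=4)
    print(A.shape, B.shape, C1.shape, C2.shape)
```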

20 pages, 26836 KiB  
Article
Improvement of Clustering Methods for Modelling Abrupt Land Surface Changes in Satellite Image Fusions
by Detang Zhong and Fuqun Zhou
Remote Sens. 2019, 11(15), 1759; https://doi.org/10.3390/rs11151759 - 26 Jul 2019
Cited by 13 | Viewed by 3472
Abstract
A key challenge in developing models for the fusion of surface reflectance data across multiple satellite sensors is ensuring that they apply both to gradual vegetation phenological dynamics and to abrupt land surface changes. To better model land cover spatial and temporal changes, we previously proposed a Prediction Smooth Reflectance Fusion Model (PSRFM) that combines a dynamic prediction model based on the linear spectral mixing model with a smoothing filter corresponding to the weighted average of forward and backward temporal predictions. One of the significant advantages of PSRFM is that it can model abrupt land surface changes either through optimized clusters or through the residuals of the predicted gradual changes. In this paper, we expanded this approach and developed more efficient clustering methods. We applied the new methods to dramatic land surface changes caused by a flood and a forest fire. Comparison of the model outputs showed that the new methods capture the land surface changes more effectively. We also compared the improved PSRFM to the two most popular reflectance fusion algorithms: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced version of STARFM (ESTARFM). The results showed that the improved PSRFM is more effective and outperforms STARFM and ESTARFM both visually and quantitatively. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
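
The linear-spectral-mixing prediction that cluster-based fusion models such as PSRFM build on can be sketched in a few lines: fine-resolution pixels are clustered, per-cluster reflectance changes are estimated from the coarse image pair by least squares, and the change is applied back at fine resolution. The plain k-means clustering, cluster count and scale factor below are illustrative assumptions, not the optimized clustering methods proposed in the paper.

```python
# Minimal sketch of a cluster-based, linear-spectral-mixing prediction step for
# coarse-to-fine reflectance fusion (a toy stand-in for the PSRFM prediction).
import numpy as np
from sklearn.cluster import KMeans

def predict_fine_image(fine_t1, coarse_t1, coarse_t2, n_clusters=5, scale=10):
    """fine_t1: (H, W) fine band at t1; coarse_*: (H//scale, W//scale) coarse band."""
    H, W = fine_t1.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        fine_t1.reshape(-1, 1)).reshape(H, W)
    # fractional cover of each cluster inside every coarse pixel
    hc, wc = H // scale, W // scale
    fractions = np.zeros((hc * wc, n_clusters))
    for c in range(n_clusters):
        mask = (labels == c).astype(float).reshape(hc, scale, wc, scale)
        fractions[:, c] = mask.mean(axis=(1, 3)).reshape(-1)
    # per-cluster change solved from the coarse-resolution temporal difference
    d_coarse = (coarse_t2 - coarse_t1).reshape(-1)
    d_cluster, *_ = np.linalg.lstsq(fractions, d_coarse, rcond=None)
    return fine_t1 + d_cluster[labels]            # predicted fine band at t2

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fine_t1 = rng.random((100, 100))
    coarse_t1 = fine_t1.reshape(10, 10, 10, 10).mean(axis=(1, 3))
    coarse_t2 = coarse_t1 + 0.1                   # toy uniform change
    print(predict_fine_image(fine_t1, coarse_t1, coarse_t2).shape)
```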

18 pages, 5990 KiB  
Article
Generating Red-Edge Images at 3 m Spatial Resolution by Fusing Sentinel-2 and Planet Satellite Products
by Wei Li, Jiale Jiang, Tai Guo, Meng Zhou, Yining Tang, Ying Wang, Yu Zhang, Tao Cheng, Yan Zhu, Weixing Cao and Xia Yao
Remote Sens. 2019, 11(12), 1422; https://doi.org/10.3390/rs11121422 - 14 Jun 2019
Cited by 21 | Viewed by 5041
Abstract
High-resolution satellite images can be used to some extent to mitigate the mixed-pixel problem caused by the lack of intensive production, farmland fragmentation, and the uneven growth of field crops in developing countries. Specifically, red-edge (RE) satellite images can be used in this context to reduce the influence of the soil background at early growth stages as well as saturation due to crop leaf area index (LAI) at later stages. However, the availability of high-resolution RE satellite image products for research and application remains limited globally. This study uses the weight-and-unmixing algorithm as well as the SUPer-REsolution for multi-spectral Multi-resolution Estimation (Wu-SupReME) approach to combine the advantages of Sentinel-2 spectral and Planet spatial resolution and generate a high-resolution RE product. The resultant fused image is highly correlated (R2 > 0.98) with the Sentinel-2 image and clearly retains the advantages of both source products. The fused image was significantly more accurate than the original images when used to predict heterogeneous wheat LAI, demonstrating that the Sentinel-2 spectral and Planet spatial advantages persist after fusion and that generating high-resolution red-edge products from Planet and Sentinel-2 images is feasible. This study provides a methodological reference for multi-source data fusion and an image product for accurate parameter inversion in quantitative remote sensing of vegetation. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
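
For readers unfamiliar with spatial-spectral sharpening, the sketch below fuses a coarser red-edge band with a finer Planet-like band using plain high-pass modulation. This is a deliberately simple stand-in, not the Wu-SupReME approach used in the paper, and the scale factor and band choices are assumptions.

```python
# Illustrative high-pass-modulation (HPM) sharpening of a coarser red-edge band
# with a finer band as the spatial-detail source (not the paper's method).
import numpy as np
from scipy.ndimage import zoom, uniform_filter

def hpm_sharpen(red_edge_coarse, planet_fine, scale):
    """red_edge_coarse: (h, w) coarse red-edge band; planet_fine: (h*scale, w*scale)."""
    re_up = zoom(red_edge_coarse, scale, order=1)          # bilinear upsampling
    planet_low = uniform_filter(planet_fine, size=scale)   # degraded fine band
    ratio = np.divide(planet_fine, planet_low,
                      out=np.ones_like(planet_fine), where=planet_low > 0)
    return re_up * ratio                                   # inject spatial detail

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    planet = rng.random((90, 90))        # toy fine band
    red_edge = rng.random((30, 30))      # toy coarse red-edge band (factor 3 for illustration)
    print(hpm_sharpen(red_edge, planet, scale=3).shape)   # (90, 90)
```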

20 pages, 2832 KiB  
Article
Enhanced Back-Projection as Postprocessing for Pansharpening
by Junmin Liu, Jing Ma, Rongrong Fei, Huirong Li and Jiangshe Zhang
Remote Sens. 2019, 11(6), 712; https://doi.org/10.3390/rs11060712 - 25 Mar 2019
Cited by 8 | Viewed by 3277
Abstract
Pansharpening is the process of integrating a high spatial resolution panchromatic image with a low spatial resolution multispectral image to obtain a multispectral image with high spatial and spectral resolution. Over the last decade, several algorithms have been developed for pansharpening. In this paper, a technique called enhanced back-projection (EBP) is introduced and applied as a postprocessing step for pansharpening. The proposed EBP first enhances the spatial details of the pansharpening result by histogram matching and high-pass modulation, followed by a back-projection process that takes into account the modulation transfer function (MTF) of the satellite sensor, such that the pansharpening result obeys the consistency property. The EBP is validated on four datasets acquired by different satellites and on several commonly used pansharpening methods. The pansharpening results achieve substantial improvements with this postprocessing technique, which is widely applicable and requires no modification of existing pansharpening methods. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
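
The back-projection consistency step described above can be sketched as an iterative correction: the sharpened band is degraded with an MTF-like Gaussian filter and decimated, compared with the original multispectral band, and the upsampled residual is added back. The Gaussian sigma, the iteration count, and the omission of the histogram-matching/high-pass-modulation stage are simplifying assumptions.

```python
# Minimal back-projection sketch enforcing the consistency property between a
# pansharpened band and the original low-resolution multispectral band.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_project(sharpened, ms_band, scale=4, sigma=1.5, n_iter=10):
    """sharpened: (H, W) pansharpened band; ms_band: (H//scale, W//scale) original."""
    out = sharpened.astype(float)
    for _ in range(n_iter):
        degraded = gaussian_filter(out, sigma)[::scale, ::scale]   # MTF-like blur + decimation
        residual = ms_band - degraded
        out = out + zoom(residual, scale, order=1)                 # project residual back
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    ms = rng.random((64, 64))                      # toy low-resolution MS band
    pan_sharpened = zoom(ms, 4, order=3) + 0.05 * rng.standard_normal((256, 256))
    fused = back_project(pan_sharpened, ms, scale=4)
    check = gaussian_filter(fused, 1.5)[::4, ::4]
    print("consistency error:", np.abs(check - ms).mean())
```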

Review


20 pages, 1333 KiB  
Review
Spatiotemporal Image Fusion in Remote Sensing
by Mariana Belgiu and Alfred Stein
Remote Sens. 2019, 11(7), 818; https://doi.org/10.3390/rs11070818 - 04 Apr 2019
Cited by 127 | Viewed by 15111
Abstract
In this paper, we discuss spatiotemporal data fusion methods in remote sensing. These methods fuse temporally sparse fine-resolution images with temporally dense coarse-resolution images. This review reveals that existing spatiotemporal data fusion methods are mainly dedicated to blending optical images. There are only a limited number of studies focusing on fusing microwave data, or on fusing microwave and optical images, in order to address the problem of gaps in the optical data caused by the presence of clouds. Therefore, future efforts are required to develop spatiotemporal data fusion methods flexible enough to accomplish different data fusion tasks under different environmental conditions and using data from different sensors as input. The review shows that additional investigations are required to account for temporal changes occurring during the observation period when predicting spectral reflectance values at a fine scale in space and time. More sophisticated machine learning methods such as convolutional neural networks (CNNs) represent a promising solution for spatiotemporal fusion, especially due to their capability to fuse images with different spectral values. Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)