Multispectral Image Acquisition, Processing and Analysis—2nd Edition

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 17392

Special Issue Editors


Guest Editor
Institute of Electronics and Telecommunications IETR UMR CNRS 6164, University of Rennes, 22305 Lannion, France
Interests: blind estimation of degradation characteristics (noise, PSF); blind restoration of multicomponent images; multimodal image correction; multicomponent image compression; multi-channel adaptive processing of signals and images; unsupervised machine learning and deep learning; multi-mode remote sensing data processing; remote sensing


Special Issue Information

Dear Colleagues,

Thanks to continual advances in lightweight, lower-cost multispectral sensors and remote sensing platform technology in recent years, end-users now have a multitude of timely observational capabilities for better sensing and monitoring of the Earth’s surface.

To exploit the full potential of these ever-advancing systems in a flexible and intelligent manner across many applied fields, analysis and processing capabilities must continue to improve accordingly. Joint efforts toward fully automated, easy-to-use and efficient systems are a key direction for facilitating and maturing the operational use of remote sensing.

This Special Issue is thus intended to cover the latest advances in the following primary topics of interest (though not limited to them) related to Multispectral Image Acquisition, Processing and Analysis:

  • State-of-the-art and emerging multispectral technologies, including new platforms (satellites, aircraft and unmanned aerial vehicles) and sensors with:
    • Spatial, spectral and temporal sensing abilities;
    • Georeferencing and navigation abilities;
    • Cooperative sensing;
  • Advanced multispectral image/data analysis and processing:
    • Lossless/lossy compression and denoising;
    • Geometrical, registration and georeferencing processing;
    • Feature extraction, classification, object recognition, change detection and domain adaptation;
  • Multisource data fusion:
    • Optical–radar fusion and pan-sharpening;
    • Field sensing;
    • Crowd sensing.

A wide spectrum of recent applications highlighting Multispectral Image Acquisition, Processing and Analysis is targeted, including biodiversity assessment; vegetation and environmental monitoring (the identification of diversity in grassland species, invasive plants, biomass estimation and wetlands); precision agriculture in agricultural ecosystems and crop management; water resource and quality management in nearshore coastal waters (mapping near-surface water constituents and benthic habitats) and inland waters (the analysis and surveying of rivers and lakes); sustainable forestry and agroforestry (forest preservation, the mapping of forest species and wildfire detection); the mapping of archaeological areas; urban development and management; and hazard monitoring.

The first edition of this Special Issue can be found at https://0-www-mdpi-com.brum.beds.ac.uk/journal/remotesensing/special_issues/Multispectral_Image

Dr. Benoit Vozel
Prof. Dr. Vladimir Lukin
Prof. Dr. Yakoub Bazi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Imaging sensors and platforms
  • Cooperative sensing
  • Multispectral data analysis
  • Multispectral data processing
  • Multisource data fusion
  • Deep learning strategies

Published Papers (10 papers)


Research

20 pages, 72434 KiB  
Article
Unified Interpretable Deep Network for Joint Super-Resolution and Pansharpening
by Dian Yu, Wei Zhang, Mingzhu Xu, Xin Tian and Hao Jiang
Remote Sens. 2024, 16(3), 540; https://0-doi-org.brum.beds.ac.uk/10.3390/rs16030540 - 31 Jan 2024
Viewed by 469
Abstract
Joint super-resolution and pansharpening (JSP) brings new insight into the spatial improvement of multispectral images. Efficiently balancing spatial and spectral quality in JSP is important for deep learning-based approaches. To address this problem, we propose a unified interpretable deep network for JSP, named UIJSP-Net. First, we formulate JSP as an optimization problem in a specially designed physical model based on the relationship among the JSP result, the multispectral image, and the panchromatic image. In particular, two deep priors are utilized to describe the latent distributions of different variables, which improves the accuracy of the physical model. Furthermore, we adopt the alternating direction method of multipliers to solve the above optimization problem, which yields a series of iterative steps. Finally, we design UIJSP-Net by unfolding these iterative steps into multiple corresponding stages in a unified network. Because UIJSP-Net has clear physical meaning, the spatial resolution of multispectral images can be efficiently improved while the spectral information is preserved. Extensive experiments on both simulated and real datasets demonstrate the superiority of UIJSP-Net over other state-of-the-art methods from both qualitative and quantitative aspects. Full article
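The abstract above describes unfolding iterative optimization steps into network stages. As a rough illustration of that general idea (not the authors' UIJSP-Net), the following Python sketch runs a plug-and-play proximal-gradient loop for single-image super-resolution, where each iteration alternates a data-fidelity gradient step with a simple smoothing "prior" step; in a learned unrolled network the prior and step sizes would be trainable. The degradation model (Gaussian blur plus subsampling) and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only: a plug-and-play proximal-gradient loop whose
# iterations mirror the kind of steps an unrolled network turns into stages.
# The blur/subsampling model, step size, and smoothing "prior" are assumptions,
# not the UIJSP-Net formulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, sigma=1.5, s=4):
    """Forward model A: Gaussian blur followed by s-fold subsampling."""
    return gaussian_filter(x, sigma)[::s, ::s]

def degrade_adjoint(y, shape, sigma=1.5, s=4):
    """Adjoint A^T: zero-insertion upsampling followed by the (symmetric) blur."""
    up = np.zeros(shape)
    up[::s, ::s] = y
    return gaussian_filter(up, sigma)

def unrolled_sr(y_lr, hr_shape, stages=10, eta=1.0, prior_sigma=0.7):
    """Each 'stage' = gradient step on ||A x - y||^2 plus a smoothing prior step."""
    x = degrade_adjoint(y_lr, hr_shape) * 16.0      # crude initialization
    for _ in range(stages):
        residual = degrade(x) - y_lr                # data-fidelity residual
        x = x - eta * degrade_adjoint(residual, hr_shape)
        x = gaussian_filter(x, prior_sigma)         # stand-in for a learned prior
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = gaussian_filter(rng.random((128, 128)), 2.0)   # synthetic "scene"
    lr = degrade(hr)
    sr = unrolled_sr(lr, hr.shape)
    print("LR shape:", lr.shape, "-> SR shape:", sr.shape)
```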

23 pages, 12875 KiB  
Article
Multiscale Fusion of Panchromatic and Multispectral Images Based on Adaptive Iterative Filtering
by Zhiqi Zhang, Jun Xu, Xinhui Wang, Guangqi Xie and Lu Wei
Remote Sens. 2024, 16(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/rs16010007 - 19 Dec 2023
Cited by 1 | Viewed by 674
Abstract
This paper proposes an efficient and high-fidelity image fusion method based on adaptive smoothing filtering for panchromatic (PAN) and multispectral (MS) image fusion. The scale ratio is the ratio of spatial resolution between the panchromatic image and the multispectral image. When facing a multiscale fusion task, traditional methods cannot simultaneously handle the loss of spectral resolution caused by high scale ratios and the reduction of spatial resolution caused by low scale ratios. To adapt to the fusion of panchromatic and multispectral satellite images at different scales, this paper addresses the insufficient filtering of high-frequency information in remote sensing images of different scales by the classic smoothing filter-based intensity modulation (SFIM) model. It replaces the traditional mean convolution kernels with Gaussian convolution kernels and builds a Gaussian pyramid to adaptively construct convolution kernels of different scales that filter out the high-frequency information of high-resolution images. The method can adaptively process panchromatic and multispectral images of different scales, iteratively filter the spatial information in panchromatic images, and ensure that the scale transformation is consistent with the definition of the multispectral images. The proposed method is compared against 15 common fusion methods on ZY-3 data with a scale ratio of 2.7 and SV-1 data with a scale ratio of 4. The results show that the method proposed in this paper retains good spatial information for image fusion at different scales and has good spectral preservation. Full article
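A minimal numpy/scipy sketch of the SFIM idea the abstract builds on (fused = upsampled MS × PAN / low-pass PAN), using a Gaussian low-pass filter in place of the classic mean filter. The scale ratio, sigma, and synthetic data are assumptions, and the authors' adaptive, pyramid-based kernel construction is not reproduced here.

```python
# Sketch of SFIM-style fusion with a Gaussian (rather than mean) low-pass kernel.
# Scale ratio, sigma, and the synthetic PAN/MS data are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sfim_gaussian(ms, pan, scale, sigma=None, eps=1e-6):
    """ms: (H, W, B) low-res multispectral; pan: (H*scale, W*scale) panchromatic."""
    if sigma is None:
        sigma = scale / 2.0                       # kernel width tied to the scale ratio
    ms_up = zoom(ms, (scale, scale, 1), order=1)  # upsample MS to the PAN grid
    pan_low = gaussian_filter(pan, sigma)         # low-pass PAN ~ PAN at MS resolution
    return ms_up * (pan[..., None] / (pan_low[..., None] + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scale = 4
    pan = gaussian_filter(rng.random((256, 256)), 1.0)
    ms = zoom(pan, 1 / scale, order=1)[..., None].repeat(4, axis=-1)  # fake 4-band MS
    fused = sfim_gaussian(ms, pan, scale)
    print("fused shape:", fused.shape)            # (256, 256, 4)
```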

18 pages, 6031 KiB  
Article
UCTNet with Dual-Flow Architecture: Snow Coverage Mapping with Sentinel-2 Satellite Imagery
by Jinge Ma, Haoran Shen, Yuanxiu Cai, Tianxiang Zhang, Jinya Su, Wen-Hua Chen and Jiangyun Li
Remote Sens. 2023, 15(17), 4213; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15174213 - 27 Aug 2023
Cited by 1 | Viewed by 919
Abstract
Satellite remote sensing (RS) has been drawing considerable research interest in land-cover classification due to its low price, short revisit time, and large coverage. However, clouds pose a significant challenge, occluding objects in satellite RS images. In addition, snow coverage mapping plays a vital role in studying hydrology and climatology and in investigating crop disease overwintering for smart agriculture. Distinguishing snow from clouds is challenging since they share similar color and reflection characteristics. Conventional approaches based on manual thresholding and machine learning algorithms (e.g., SVM and Random Forest) cannot fully extract useful information, while current deep-learning methods, e.g., CNNs or Transformer models, still have limitations in fully exploiting the abundant spatial/spectral information of RS images. Therefore, this work aims to develop an efficient snow and cloud classification algorithm using satellite multispectral RS images. In particular, we propose an algorithm entitled UCTNet that adopts a dual-flow structure to integrate information extracted via Transformer and CNN branches. A CNN and Transformer integration module (CTIM) is designed to maximally integrate the information extracted by the two branches, while a final information fusion module and an auxiliary information fusion head are designed for better performance. A four-band satellite multispectral RS dataset for snow coverage mapping is adopted for performance evaluation. Compared with previous methods (e.g., U-Net, Swin, and CSDNet), the experimental results show that the proposed UCTNet achieves the best accuracy (95.72%) and mean IoU score (91.21%) with the smallest model size (3.93 M). The confirmed efficiency of UCTNet shows the great potential of the dual-flow architecture for snow and cloud classification. Full article

18 pages, 2984 KiB  
Article
IESRGAN: Enhanced U-Net Structured Generative Adversarial Network for Remote Sensing Image Super-Resolution Reconstruction
by Xiaohan Yue, Danfeng Liu, Liguo Wang, Jón Atli Benediktsson, Linghong Meng and Lei Deng
Remote Sens. 2023, 15(14), 3490; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15143490 - 11 Jul 2023
Cited by 1 | Viewed by 1580
Abstract
With the continuous development of modern remote sensing satellite technology, high-resolution (HR) remote sensing image data have gradually come into wide use. However, due to the vastness of the areas that need to be monitored and the difficulty of obtaining HR images, most monitoring projects still rely on low-resolution (LR) data for the regions being monitored. Remote sensing image super-resolution (SR) reconstruction technology effectively compensates for the lack of original HR images. This paper proposes an Improved Enhanced Super-Resolution Generative Adversarial Network (IESRGAN) based on an enhanced U-Net structure for 4× detail reconstruction of LR images using NaSC-TG2 remote sensing images. In this method, the generator and discriminator of the GAN are studied in depth and improved accordingly. Specifically, input images are subjected to reflective padding before entering the Residual-in-Residual Dense Blocks (RRDB) to enhance edge information. Meanwhile, a U-Net structure is adopted for the discriminator, incorporating spectral normalization to focus on semantic and structural changes between real and fake images, thereby improving generated image quality and GAN performance. To evaluate the effectiveness and generalization ability of the proposed model, experiments were conducted on multiple real-world remote sensing image datasets. Experimental results demonstrate that IESRGAN exhibits strong generalization capabilities while delivering outstanding performance in terms of the PSNR, SSIM, and LPIPS image evaluation metrics. Full article
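The two concrete changes highlighted above, reflective padding of the generator input and spectral normalization in a U-Net-style discriminator, can be illustrated with standard PyTorch building blocks. This is a generic sketch, not the IESRGAN architecture; the layer widths and shapes are assumptions.

```python
# Generic PyTorch sketch of the two ingredients named in the abstract:
# reflection padding before convolution, and spectrally normalized
# discriminator convolutions. Layer widths are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

class PaddedStem(nn.Module):
    """Generator front-end: reflect-pad the input so border pixels keep context."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.pad = nn.ReflectionPad2d(1)
        self.conv = nn.Conv2d(in_ch, feat, kernel_size=3, padding=0)

    def forward(self, x):
        return torch.relu(self.conv(self.pad(x)))

class SNDiscBlock(nn.Module):
    """One spectrally normalized downsampling block of a U-Net-style discriminator."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = spectral_norm(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1))

    def forward(self, x):
        return F.leaky_relu(self.conv(x), 0.2)

if __name__ == "__main__":
    lr = torch.randn(1, 3, 64, 64)
    feats = PaddedStem()(lr)                 # (1, 64, 64, 64)
    score = SNDiscBlock(64, 128)(feats)      # (1, 128, 32, 32)
    print(feats.shape, score.shape)
```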

19 pages, 6805 KiB  
Article
Applying Artificial Cover to Reduce Melting in Dagu Glacier in the Eastern Qinghai-Tibetan Plateau
by Yida Xie, Feiteng Wang, Chunhai Xu, Xiaoying Yue and Shujing Yang
Remote Sens. 2023, 15(7), 1755; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15071755 - 24 Mar 2023
Cited by 1 | Viewed by 1543
Abstract
Global warming has accelerated during the past decades, causing a dramatic shrinking of glaciers across the globe. So far, attempts to counterbalance glacial melt have proven inadequate and are mostly limited to a few glacial landscapes. In the present study, a scientific glacier protection experiment was conducted at the Dagu Glacier site, specifically on Dagu Glacier No. 17, situated at 4830 m a.s.l. The study deliberately verified the feasibility and effectiveness of using geotextile covers on small glaciers located at high altitudes between August 2020 and October 2021. Combining field campaigns, terrestrial laser scanning, and unmanned aerial vehicle surveys, the observations revealed that the mass loss in the area covered with geotextiles was, on average, 15% lower per year than in the uncovered areas. The reason may be that the albedo of the geotextile is higher than that of the glacier surface. In addition, the aging of geotextiles causes a decline in their albedo, leading to a gradual decline in the effectiveness of the resulting glacier protection. The results indicate that geotextiles can effectively mitigate glacier ablation, although cost-related limitations make it difficult to upscale the use of artificial cover. Nonetheless, active artificial cover could be effective for small glaciers, glacier landscapes, and glacier terminus regions. Full article

24 pages, 5584 KiB  
Article
BPG-Based Lossy Compression of Three-Channel Noisy Images with Prediction of Optimal Operation Existence and Its Parameters
by Bogdan Kovalenko, Vladimir Lukin and Benoit Vozel
Remote Sens. 2023, 15(6), 1669; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15061669 - 20 Mar 2023
Cited by 1 | Viewed by 1096
Abstract
Nowadays, there is a clear trend toward increasing both the number of remote-sensing images acquired and their average size. This leads to the need to compress the images for storage, dissemination, and transfer over communication lines, where lossy compression techniques are more popular. The images to be compressed, or some of their components, are often noisy and must therefore be compressed taking the properties of the noise into account. Due to the noise-filtering effect obtained during lossy compression of noisy images, an optimal operating point (OOP) may exist. The OOP is the value of the parameter controlling compression for which the quality of the compressed image is closer (closest) to the corresponding noise-free image than the quality of the noisy (original, uncompressed) image, according to some quantitative criterion (metric). In practice, it is important to know whether the OOP exists for a given image, because if it exists, it is appropriate to perform the compression at the OOP or at least in its neighborhood. Since the noise-free image is absent in practice, it is impossible to determine a priori whether the OOP exists or not. Here, we focus on three-channel remote-sensing images and show that the existence of the OOP can be easily predicted. Furthermore, it is possible to predict the metric values or their improvements with accuracy appropriate for practical use. The BPG (better portable graphics) encoder is considered as a particular case of an efficient compression technique. As an initial design step, the case of additive white Gaussian noise with equal variance in the three components is considered. While previous research mainly focused on predicting the improvement (reduction) of the PSNR and PSNR-HVS-M metrics, here we focus on the modern visual quality metrics PSNR-HA and MDSI. We also discuss what to do if, according to the prediction, an OOP is absent. Examples of lossy compression of noisy three-channel remote sensing images are given. It is also shown that three-dimensional compression increases the compression ratio by several times compared with component-wise compression at the OOP. Full article
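The notion of an optimal operating point can be illustrated with a toy search: sweep the parameter of a lossy codec applied to a noisy image and check whether any setting brings the decompressed image closer to the noise-free reference than the noisy image itself. The sketch below uses simple uniform quantization as a stand-in for BPG and PSNR as the metric; in the paper's setting the noise-free reference is unavailable, so the OOP has to be predicted rather than measured as done here.

```python
# Toy illustration of searching for an optimal operating point (OOP).
# Uniform quantization stands in for a real codec such as BPG, and PSNR
# stands in for the visual-quality metrics discussed in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 20 * np.log10(peak / np.sqrt(mse))

def fake_codec(img, step):
    """Stand-in lossy compression: uniform quantization with a given step."""
    return np.round(img / step) * step

rng = np.random.default_rng(2)
clean = 255 * gaussian_filter(rng.random((256, 256)), 3.0)      # noise-free reference
noisy = clean + rng.normal(0, 10, clean.shape)                  # AWGN, sigma = 10

baseline = psnr(noisy, clean)                                   # quality of uncompressed noisy image
steps = [2, 4, 8, 12, 16, 24, 32, 48]
quality = [psnr(fake_codec(noisy, s), clean) for s in steps]

best = int(np.argmax(quality))
print(f"noisy vs clean: {baseline:.2f} dB")
for s, q in zip(steps, quality):
    print(f"step {s:2d}: {q:.2f} dB")
print("OOP exists" if quality[best] > baseline else "no OOP",
      f"(best step {steps[best]})")
```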

19 pages, 6851 KiB  
Article
HRRNet: Hierarchical Refinement Residual Network for Semantic Segmentation of Remote Sensing Images
by Shiwei Cheng, Baozhu Li, Le Sun and Yuwen Chen
Remote Sens. 2023, 15(5), 1244; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15051244 - 23 Feb 2023
Cited by 4 | Viewed by 1784
Abstract
Semantic segmentation of high-resolution remote sensing images plays an important role in many practical applications, including precision agriculture and natural disaster assessment. With the emergence of a large number of studies on convolutional neural networks, the performance of semantic segmentation models for remote sensing images has been dramatically improved. However, many deep convolutional network models do not fully refine the segmentation result maps, and the contextual dependencies of the semantic feature maps have not been adequately exploited. This article proposes a hierarchical refinement residual network (HRRNet) to address these issues. HRRNet mainly consists of a ResNet50 backbone, attention blocks, and decoders. The attention block consists of a channel attention module (CAM), a pooling residual attention module (PRAM), and residual structures. Specifically, the feature maps output by the four blocks of ResNet50 are passed through the attention block to fully explore the contextual dependencies of the positions and channels of the semantic feature maps; the feature maps of each branch are then fused step by step to refine the feature maps, thereby improving the segmentation performance of the proposed HRRNet. Experiments show that the proposed HRRNet improves segmentation result maps compared with various state-of-the-art networks on the Vaihingen and Potsdam datasets. Full article
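As a generic illustration of the channel-attention idea used in blocks such as the CAM above, the following numpy forward pass reweights feature-map channels by globally pooled statistics. This is the widely used squeeze-and-excitation pattern, not the paper's exact module, and the random matrices stand in for learned weights.

```python
# Generic squeeze-and-excitation style channel attention (forward pass only).
# Random matrices stand in for learned weights; this is not HRRNet's exact CAM.
import numpy as np

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Returns channel-reweighted features of the same shape."""
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)           # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1), shape (C,)
    return feat * gate[:, None, None]                # rescale each channel

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    feat = rng.normal(size=(64, 32, 32))             # C=64 feature map
    w1 = rng.normal(size=(16, 64)) * 0.1             # channel reduction ratio 4
    w2 = rng.normal(size=(64, 16)) * 0.1
    out = channel_attention(feat, w1, w2)
    print(out.shape)                                 # (64, 32, 32)
```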

27 pages, 20088 KiB  
Article
Automatic Relative Radiometric Normalization of Bi-Temporal Satellite Images Using a Coarse-to-Fine Pseudo-Invariant Features Selection and Fuzzy Integral Fusion Strategies
by Armin Moghimi, Ali Mohammadzadeh, Turgay Celik, Brian Brisco and Meisam Amani
Remote Sens. 2022, 14(8), 1777; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14081777 - 07 Apr 2022
Cited by 8 | Viewed by 2409 | Correction
Abstract
Relative radiometric normalization (RRN) is important for pre-processing and analyzing multitemporal remote sensing (RS) images. Multitemporal RS images usually include different land use/land cover (LULC) types; therefore, assuming an identical linear relationship during RRN modeling may result in errors in the RRN results. To resolve this issue, we proposed a new automatic RRN technique that efficiently selects clustered pseudo-invariant features (PIFs) through a coarse-to-fine strategy and uses them in a fusion-based RRN modeling approach. In the coarse stage, an efficient difference index was first generated from the down-sampled reference and target images by combining the spectral correlation, spectral angle mapper (SAM), and Chebyshev distance. This index was then categorized into three groups of changed, unchanged, and uncertain classes using a fast multiple-thresholding technique. In the fine stage, the subject image was first segmented into different clusters by the histogram-based fuzzy c-means (HFCM) algorithm. The optimal PIFs were then selected from unchanged and uncertain regions using each cluster’s bivariate joint distribution analysis. In the RRN modeling step, two normalized subject images were first produced using the robust linear regression (RLR) and cluster-wise RLR (CRLR) methods based on the clustered PIFs. Finally, the normalized images were fused using the Choquet fuzzy integral fusion strategy to overcome the discontinuity between clusters in the final results and to keep the radiometric rectification optimal. Several experiments were implemented on four different bi-temporal satellite images and a simulated dataset to demonstrate the efficiency of the proposed method. The results showed that the proposed method yielded superior RRN results and outperformed other considered well-known RRN algorithms in terms of both accuracy level and execution time. Full article
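To make the normalization step concrete, the sketch below fits a per-band linear gain and offset from target to reference over a set of PIF pixels with a simple Huber-weighted iteratively reweighted least squares, then applies it to the whole target band. The PIF selection, clustering, and fuzzy fusion stages of the paper are not reproduced; the synthetic data and the Huber threshold are assumptions.

```python
# Sketch of relative radiometric normalization by robust (Huber-weighted IRLS)
# linear regression over pseudo-invariant feature (PIF) pixels for one band.
# PIF selection, clustering, and the fuzzy fusion of the paper are omitted.
import numpy as np

def robust_linear_fit(x, y, delta=10.0, iters=20):
    """Fit y ~ gain * x + offset with Huber weights; x, y are 1-D PIF samples."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)
        gain, offset = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - (gain * x + offset)
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))   # Huber weights
    return gain, offset

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    reference_pif = rng.uniform(50, 200, 500)
    target_pif = 0.8 * reference_pif + 12 + rng.normal(0, 3, 500)  # synthetic radiometric shift
    target_pif[:20] += 80                                          # a few changed outlier pixels
    gain, offset = robust_linear_fit(target_pif, reference_pif)
    target_band = rng.uniform(0, 255, (128, 128))
    normalized = gain * target_band + offset                       # radiometrically aligned band
    print(f"gain={gain:.3f}, offset={offset:.2f}")
```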

35 pages, 9616 KiB  
Article
Discrete Atomic Transform-Based Lossy Compression of Three-Channel Remote Sensing Images with Quality Control
by Victor Makarichev, Irina Vasilyeva, Vladimir Lukin, Benoit Vozel, Andrii Shelestov and Nataliia Kussul
Remote Sens. 2022, 14(1), 125; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010125 - 28 Dec 2021
Cited by 15 | Viewed by 1997
Abstract
Lossy compression of remote sensing data has found numerous applications. Several requirements are usually imposed on the methods and algorithms to be used: a large compression ratio has to be provided, the introduced distortions should not lead to a noticeable reduction of classification accuracy, compression has to be carried out quickly enough, etc. An additional requirement could be to provide privacy of the compressed data. In this paper, we show that these requirements can be easily and effectively met by compression based on the discrete atomic transform (DAT). Three-channel remote sensing (RS) images that are part of multispectral data are used as examples. It is demonstrated that the quality of images compressed by DAT can be varied and controlled by setting the maximal absolute deviation. This parameter also relates strictly to more traditional metrics such as the root mean square error (RMSE) and peak signal-to-noise ratio (PSNR), which can thus be controlled. It is also shown that there are several variants of DAT having different depths. Their performances are compared from different viewpoints, and recommendations on transform depth are given. The effects of lossy compression on three-channel image classification using the maximum likelihood (ML) approach are studied. It is shown that the total probability of correct classification remains almost the same for a wide range of distortions introduced by lossy compression, although some variations of the correct classification probabilities take place for particular classes depending on the peculiarities of the feature distributions. Experiments are carried out for multispectral Sentinel images of different complexities. Full article
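The link between the maximal absolute deviation (MAD) used to control the compression and the more familiar RMSE/PSNR metrics can be checked numerically: since every pixel error is bounded by the MAD, RMSE ≤ MAD and hence PSNR ≥ 20·log10(255/MAD). The sketch below verifies this on a quantized toy image; uniform quantization is only a stand-in for the DAT-based codec.

```python
# Numeric check of the MAD / RMSE / PSNR relationship mentioned in the abstract.
# Uniform quantization is only a stand-in for the DAT-based codec.
import numpy as np

rng = np.random.default_rng(5)
img = rng.uniform(0, 255, (256, 256))
step = 12.0
rec = np.round(img / step) * step        # introduces per-pixel error bounded by step/2

err = rec - img
mad = np.max(np.abs(err))                # maximal absolute deviation
rmse = np.sqrt(np.mean(err ** 2))
psnr = 20 * np.log10(255.0 / rmse)

print(f"MAD  = {mad:.2f} (bounded by step/2 = {step / 2:.2f})")
print(f"RMSE = {rmse:.2f} <= MAD")
print(f"PSNR = {psnr:.2f} dB >= {20 * np.log10(255.0 / mad):.2f} dB")
```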

23 pages, 27009 KiB  
Article
A New Multispectral Data Augmentation Technique Based on Data Imputation
by Álvaro Acción, Francisco Argüello and Dora B. Heras
Remote Sens. 2021, 13(23), 4875; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13234875 - 30 Nov 2021
Cited by 3 | Viewed by 2634
Abstract
Deep Learning (DL) has been recently introduced into the hyperspectral and multispectral image classification landscape. Despite the success of DL in the remote sensing field, DL models are computationally intensive due to the large number of parameters they need to learn. The high density of information present in remote sensing imagery with high spectral resolution can make the application of DL models to large scenes challenging. Methods such as patch-based classification require large amounts of data to be processed during the training and prediction stages, which translates into long processing times and high energy consumption. One of the solutions to decrease the computational cost of these models is to perform segment-based classification. Segment-based classification schemes can significantly decrease training and prediction times, and also offer advantages over simply reducing the size of the training datasets by randomly sampling training data. The lack of a large enough number of samples can, however, pose an additional challenge, causing these models to not generalize properly. Data augmentation methods are used to generate new synthetic samples based on existing data to increase the classification performance. In this work, we propose a new data augmentation scheme using data imputation and matrix completion methods for segment-based classification. The proposal has been validated using two high-resolution multispectral datasets from the literature. The results obtained show that the proposed approach successfully increases the classification performance across all the scenes tested and that data imputation methods applied to multispectral imagery are a valid means to perform data augmentation. A comparison of classification accuracy between different imputation methods applied to the proposed scheme was also carried out. Full article
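A bare-bones version of imputation-driven augmentation can be sketched as follows: take the pixels-by-bands matrix of one segment, hide a random subset of entries, and refill them with an iterative truncated-SVD (matrix completion) imputer, yielding a perturbed but spectrally consistent synthetic sample. The masking rate, rank, and synthetic data are assumptions, and this does not reproduce the specific imputation methods evaluated in the paper.

```python
# Sketch of data augmentation by masking and low-rank (truncated-SVD) imputation
# on a segment's pixels-by-bands matrix. Rank, masking rate, and data are assumptions.
import numpy as np

def svd_impute(X, mask, rank=2, iters=50):
    """Fill entries where mask is False using iterative truncated-SVD completion."""
    filled = np.where(mask, X, np.nanmean(np.where(mask, X, np.nan), axis=0))
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, X, low_rank)        # keep observed, update missing
    return filled

def augment_segment(segment, drop=0.3, rng=None):
    """Return a synthetic sample by hiding `drop` of the entries and imputing them."""
    rng = rng or np.random.default_rng()
    mask = rng.random(segment.shape) > drop
    return svd_impute(segment, mask)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    bands = rng.normal(size=(2, 6))                  # two latent spectral patterns
    segment = rng.random((200, 2)) @ bands + rng.normal(0, 0.01, (200, 6))
    synthetic = augment_segment(segment, rng=rng)
    print("max deviation from original:", np.max(np.abs(synthetic - segment)))
```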
