Remote Sensing Image Processing and Application

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Earth Sciences".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 13919

Special Issue Editors


Guest Editor
Prof. Dr. Weitao Chen
School of Computer Science, China University of Geosciences, Wuhan 430074, China
Interests: satellite image analysis; satellite image processing; earth observation; geology; remote sensing; classification; feature selection; mapping; geospatial science; deep learning

Guest Editor
Dr. Ailong Ma
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
Interests: remote sensing; land cover mapping; object detection; deep learning; disaster response

Guest Editor
Prof. Dr. Guohua Wu
Traffic and Transportation Engineering, Central South University, Changsha 410075, China
Interests: planning and scheduling; swarm intelligence; evolutionary computation; intelligent transportation

Special Issue Information

Dear Colleagues,

Today, it is easy to obtain remote sensing images from many types of sensors, such as hyperspectral, multispectral, and LiDAR instruments. Remote sensing image (RSI) processing is one of the fastest-growing research areas because of its wide range of applications.

As remote sensing technologies and methods have continued to improve over recent decades, scientists have made great strides in the field of remote sensing image processing. Satellite, airborne, UAV, and terrestrial imaging techniques are constantly evolving in terms of data volume, quality, and variety. Remarkable efforts have been made to improve interpretation accuracy, subpixel-level classification, and many other aspects.

This Special Issue will be a collection of articles focusing on new insights, new developments, current challenges, and future prospects in the field of remote sensing image processing. It aims to present the latest advances in innovative image analysis and processing techniques and their contributions to a wide range of application areas, in an effort to predict where they will take the discipline and practice in the coming years.

Prof. Dr. Weitao Chen
Dr. Ailong Ma
Prof. Dr. Guohua Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (9 papers)


Research

20 pages, 1450 KiB  
Article
Detection of Ocean Internal Waves Based on Modified Deep Convolutional Generative Adversarial Network and WaveNet in Moderate Resolution Imaging Spectroradiometer Images
by Zhongyi Jiang, Xing Gao, Lin Shi, Ning Li and Ling Zou
Appl. Sci. 2023, 13(20), 11235; https://doi.org/10.3390/app132011235 - 12 Oct 2023
Viewed by 849
Abstract
The generation and propagation of internal waves in the ocean are common phenomena that play a pivotal role in the transport of mass, momentum, and energy, as well as in global climate change. Internal waves serve as a critical component of oceanic processes, contributing to the redistribution of heat and nutrients in the ocean, which, in turn, has implications for global climate regulation. However, the automatic identification of internal waves in oceanic regions from remote sensing images has presented a significant challenge. In this research paper, we address this challenge by designing a data augmentation approach grounded in a modified deep convolutional generative adversarial network (DCGAN) to enrich MODIS remote sensing image data for the automated detection of internal waves in the ocean. Utilizing t-distributed stochastic neighbor embedding (t-SNE), we demonstrate that the feature distribution of the images produced by the modified DCGAN closely resembles that of the original images. By using t-SNE dimensionality reduction to map high-dimensional remote sensing data into a two-dimensional space, we can better understand, visualize, and analyze the quality of the data generated by the modified DCGAN. The images generated by the modified DCGAN not only expand the dataset's size but also exhibit diverse characteristics, enhancing the model's generalization performance. Furthermore, we have developed a deep neural network named "WaveNet", which incorporates a channel-wise attention mechanism to effectively handle complex remote sensing images, resulting in high classification accuracy and robustness. It is important to note that this study has limitations, such as the reliance on specific remote sensing data sources and the need for further validation across various oceanic regions; these limitations are essential to consider in the broader context of oceanic research and remote sensing applications. We initially pre-train WaveNet using the EuroSAT remote sensing dataset and subsequently employ it to identify internal waves in MODIS remote sensing images. Experiments show that the highest average recognition accuracy achieved is 98.625%. Compared with traditional data augmentation training sets, using the training set generated by the modified DCGAN leads to a 5.437% improvement in WaveNet's recognition rate.
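As a rough illustration of the t-SNE check described in this abstract, the sketch below (not taken from the paper; the feature arrays and t-SNE parameters are hypothetical) projects real and DCGAN-generated image features into two dimensions so that their distributions can be compared visually.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical feature matrices: one row per image, e.g. flattened pixels or
# CNN embeddings of real MODIS chips and of DCGAN-generated chips.
real_feats = np.random.rand(200, 512)
fake_feats = np.random.rand(200, 512)

X = np.vstack([real_feats, fake_feats])
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)

plt.scatter(emb[:200, 0], emb[:200, 1], s=8, label="original images")
plt.scatter(emb[200:, 0], emb[200:, 1], s=8, label="DCGAN-generated images")
plt.legend()
plt.title("t-SNE comparison of real vs. generated feature distributions")
plt.show()
```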

16 pages, 5904 KiB  
Article
Remote Sensing Multimodal Image Matching Based on Structure Feature and Learnable Matching Network
by Songlai Han, Xuesong Liu, Jing Dong and Haiqiao Liu
Appl. Sci. 2023, 13(13), 7701; https://doi.org/10.3390/app13137701 - 29 Jun 2023
Cited by 1 | Viewed by 995
Abstract
Matching remotely sensed multimodal images is a crucial process that poses significant challenges due to nonlinear radiometric differences and substantial image noise. To overcome these difficulties, this study presents a novel and practical template-matching algorithm specifically designed for this purpose. Unlike traditional approaches that rely on image intensity, the proposed algorithm matches multimodal images based on their geometric structure information. This approach enables the method to adapt effectively to grayscale variations caused by radiometric differences. To enhance the matching performance, a principal component analysis (PCA) calculation based on the log-Gabor filter is proposed to estimate the structural features of the image. The proposed method can estimate the structural features accurately even under severe noise distortion. In addition, a learnable matching network is proposed for similarity measurement to adapt to the gradient reversal caused by radiometric differences among remotely sensed multimodal images. Infrared, visible-light, and synthetic aperture radar images are used in the evaluation to verify the performance of the proposed algorithm. Based on the results, the proposed algorithm has a distinct advantage over other state-of-the-art template-matching algorithms.
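To make the structure-feature idea concrete, here is a minimal sketch (not the authors' code; the filter parameters and the use of a single principal component are assumptions) that filters an image with a small log-Gabor orientation bank and takes the first principal component of the per-pixel responses as a structure map.

```python
import numpy as np

def log_gabor_bank(rows, cols, f0=0.1, sigma_ratio=0.55,
                   n_orient=6, sigma_theta=np.pi / 8):
    """Frequency-domain log-Gabor filters at n_orient orientations."""
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0); DC is zeroed below
    theta = np.arctan2(fy, fx)
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0
    bank = []
    for k in range(n_orient):
        ang = k * np.pi / n_orient
        d = np.arctan2(np.sin(theta - ang), np.cos(theta - ang))  # wrapped angle
        bank.append(radial * np.exp(-(d ** 2) / (2 * sigma_theta ** 2)))
    return bank

def structure_feature(img):
    """First principal component of multi-orientation log-Gabor magnitudes."""
    F = np.fft.fft2(img.astype(float))
    resp = [np.abs(np.fft.ifft2(F * g)) for g in log_gabor_bank(*img.shape)]
    X = np.stack(resp, axis=-1).reshape(-1, len(resp))
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    return (X @ Vt[0]).reshape(img.shape)
```

Structure maps computed this way for, say, an optical template and a SAR search window could then be compared by the learnable matching network instead of raw intensities.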

18 pages, 3431 KiB  
Article
Multi-Scale Dynamic Analysis of the Russian–Ukrainian Conflict from the Perspective of Night-Time Lights
by Le-Lin Li, Peng Liang, San Jiang and Ze-Qiang Chen
Appl. Sci. 2022, 12(24), 12998; https://doi.org/10.3390/app122412998 - 18 Dec 2022
Cited by 4 | Viewed by 3074
Abstract
Under the influence of various forces, the conflict between Russia and Ukraine is violent and changeable. Obtaining battlefield data by conventional means is difficult, yet the data must be secure, reliable, and comprehensive. Remote sensing technology can make up for the deficiencies of conventional methods. Using night-time light data, this paper compiles the total night-time light of the built-up areas of Ukrainian cities within 36 days of the outbreak of the Russian–Ukrainian conflict. Furthermore, the dynamic changes in night-time light at the national, regional, and urban scales are analyzed using the night-time light ratio index and the dynamic degree model combined with the time-series night-time light data. The results show that (1) after the outbreak of the war, more than 60% of the night-time light in Ukrainian cities was lost. In terms of recovery speed, night-time light in the pro-Russian areas recovered significantly faster, followed by the Russian-controlled areas, while the recovery speed in areas of conflict was the lowest. (2) Decision-making by belligerents affects non-combatant activities and is thus reflected in night-time light: the loss of night-time light is reduced when military operations are reduced and mitigated when humanitarian operations are increased. (3) The changes in night-time light reflect the changes in the conflict situation well; when the conflict between Russia and Ukraine intensifies, night-time light shows an overall downward trend. In this context, night-time light data can be used as an effective source for deducing and predicting battlefield situations.
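A back-of-the-envelope sketch of the kind of indices mentioned above could look like the following (these are not the paper's exact definitions, which may differ; the ratio-index and dynamic-degree formulas and all array names are assumptions).

```python
import numpy as np

def ntl_ratio_index(ntl_image, baseline_image):
    """Ratio of total night-time light to a pre-conflict baseline (assumed definition)."""
    return ntl_image.sum() / baseline_image.sum()

def dynamic_degree(ntl_start, ntl_end, n_days):
    """Average daily rate of change of total night-time light, in percent (assumed definition)."""
    return (ntl_end.sum() - ntl_start.sum()) / ntl_start.sum() / n_days * 100.0

# Hypothetical daily composites for one city's built-up area.
baseline = np.random.rand(200, 200) * 50                      # pre-war reference
series = [baseline * f for f in np.linspace(1.0, 0.35, 36)]   # 36 days of decline

ratios = [ntl_ratio_index(day, baseline) for day in series]
print(f"light remaining on day 36: {ratios[-1]:.0%}")
print(f"dynamic degree over 36 days: {dynamic_degree(series[0], series[-1], 36):.2f}%/day")
```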

18 pages, 6091 KiB  
Article
Using HJ-1 CCD and MODIS Fusion Data to Invert HJ-1 NBAR for Time Series Analysis, a Case Study in the Mountain Valley of North China
by Huaiyuan Li, Zhiyuan Han and Heng Wang
Appl. Sci. 2022, 12(23), 12233; https://doi.org/10.3390/app122312233 - 29 Nov 2022
Viewed by 911
Abstract
HJ-1 charge-coupled device (CCD) data with high temporal and medium spatial resolution are widely used in environmental and disaster monitoring in China. However, due to bad weather, it is difficult to obtain sufficient time-continuous HJ-1 CCD data for environmental monitoring. In this study, a mountain valley with farmland and forestland in North China is selected as the experimental area, and HJ-1 CCD and moderate resolution imaging spectroradiometer (MODIS) data are used in the case study. An improved method of fusing the data and inverting surface reflectance is presented to obtain HJ-1 nadir BRDF-adjusted reflectance (NBAR) data using linear matching of the Ross Thick–Li Sparse Reciprocal (RTLSR) model; reflectance is then predicted using the seasonal autoregressive integrated moving average (SARIMA) model. The fused data have the advantages of high spatial and temporal resolution and meet the requirements for high-quality, sufficient small-scale regional data. This case study provides a feasible method for the HJ-1 satellites to produce secondary products for small-scale remote sensing research on the ground surface. It also provides a reference for the dynamic information acquisition and application of small-satellite data, contributing to improved remote sensing estimation of surface environmental variables.
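As a hedged illustration of the SARIMA step only (not the paper's configuration; the monthly frequency, the (1,1,1)×(1,1,1,12) orders, and the synthetic series are all assumptions), a reflectance time series could be forecast with statsmodels as follows.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly NBAR time series for one pixel/plot (3 years).
t = np.arange(36)
nbar = 0.25 + 0.1 * np.sin(2 * np.pi * t / 12) + 0.01 * np.random.randn(36)

# Seasonal ARIMA with a 12-step (annual) cycle; orders are illustrative only.
model = SARIMAX(nbar, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=6)   # predict the next half year of reflectance
print(forecast)
```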

19 pages, 2643 KiB  
Article
Selective Search Collaborative Representation for Hyperspectral Anomaly Detection
by Chensong Yin, Leitao Gao, Mingjie Wang and Anni Liu
Appl. Sci. 2022, 12(23), 12015; https://doi.org/10.3390/app122312015 - 24 Nov 2022
Viewed by 971
Abstract
As an important tool in hyperspectral anomaly detection, collaborative representation detection (CRD) has attracted significant attention in recent years. However, the lack of global feature utilization, the contamination of the background dictionary, and the dependence on the sizes of the dual window lead to instability in the anomaly detection performance of CRD, making it difficult to apply in practice. To address these issues, a selective search collaborative representation detector is proposed. The selective search is based on global information and spectral similarity to realize the flexible fusion of adjacent homogeneous pixels. Based on the homogeneous segmentation, pixels with a low background probability can be removed from the local background dictionary in CRD to purify the local background and improve detection performance, even under inappropriate dual-window sizes. Three real hyperspectral images are used to verify the feasibility and effectiveness of the proposed method. The detection performance is depicted by intuitive detection images, receiver operating characteristic curves, and area under the curve (AUC) values, as well as by running time. Comparison with CRD proves that the proposed method effectively improves the anomaly detection accuracy of CRD and reduces the dependence of anomaly detection performance on the sizes of the dual window.
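For readers unfamiliar with CRD, the core per-pixel computation can be sketched as follows. This is a simplified, distance-weighted Tikhonov formulation in the spirit of standard CRD; the dual-window dictionary construction and this paper's selective-search purification step are not shown, and the variable names are hypothetical.

```python
import numpy as np

def crd_score(y, X, lam=1e-2):
    """Collaborative-representation anomaly score for one pixel.

    y : (d,)   spectrum of the pixel under test
    X : (d, n) background dictionary, e.g. spectra of dual-window neighbours
               (in the paper, purified by the selective-search segmentation)
    """
    # Distance-weighted Tikhonov regularizer: atoms far from y are penalised more.
    gamma = np.diag(np.linalg.norm(X - y[:, None], axis=0))
    alpha = np.linalg.solve(X.T @ X + lam * gamma @ gamma, X.T @ y)
    # A large reconstruction residual suggests an anomaly.
    return np.linalg.norm(y - X @ alpha)

# Tiny usage example with random data.
rng = np.random.default_rng(0)
background = rng.normal(size=(50, 30))      # 30 neighbouring spectra, 50 bands
normal_pixel = background.mean(axis=1)
anomalous_pixel = normal_pixel + 3.0
print(crd_score(normal_pixel, background), crd_score(anomalous_pixel, background))
```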

17 pages, 6440 KiB  
Article
An Automatic Geometric Registration Method for Multi Temporal 3D Models
by Haixing Shang, Guanghong Ju, Guilin Li, Zufeng Li and Chaofeng Ren
Appl. Sci. 2022, 12(21), 11070; https://doi.org/10.3390/app122111070 - 01 Nov 2022
Viewed by 1210
Abstract
Ground change detection based on multi-temporal 3D models is attracting more and more attention. However, the conventional methods of using UAV GPS-supported bundle adjustment or measuring ground control points before each data collection are not only economically costly but also insufficiently accurate geometrically. In this paper, an automatic geometric-registration method for multi-temporal 3D models is proposed. First, feature points are extracted from the highest-resolution texture image of the 3D model, and their corresponding spatial location information is obtained from the triangular mesh of the 3D model, converting them into 3D spatial feature points. Second, the parameters of the transformation model relating the 3D model to be registered to the base 3D model are estimated from the spatial feature points with outliers removed, and all vertex positions of the model to be registered are updated to the coordinate system of the base 3D model. The experimental results show that the position measurement error of ground objects is less than 0.01 m for multi-temporal 3D models obtained by the proposed method. Since the method does not require the measurement of a large number of ground control points for each data acquisition, it has great advantages in cost and geometric accuracy for long-period, high-precision ground monitoring projects.
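One common way to realize the second step, estimating a transformation from matched 3D feature points, is a least-squares similarity transform (the Umeyama solution). The sketch below assumes such a model and that outliers have already been rejected; the transformation model actually used in the paper may differ.

```python
import numpy as np

def estimate_similarity(P, Q):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping points P onto Q, following Umeyama (1991).

    P, Q : (n, 3) matched 3-D feature points with outliers already removed.
    """
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    cov = Qc.T @ Pc / len(P)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum(axis=1).mean()
    t = mu_q - s * R @ mu_p
    return s, R, t

# Vertices of the model to be registered are then mapped with: v_new = s * R @ v + t
```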

19 pages, 6597 KiB  
Article
Target Identification via Multi-View Multi-Task Joint Sparse Representation
by Jiawei Chen, Zhenshi Zhang and Xupeng Wen
Appl. Sci. 2022, 12(21), 10955; https://doi.org/10.3390/app122110955 - 28 Oct 2022
Cited by 1 | Viewed by 1143
Abstract
The monitoring efficiency and accuracy of visible and infrared video surveillance have been relatively low. In this paper, we propose an automatic target identification method for surveillance video, which provides an effective solution for processing surveillance video data. Specifically, a target identification method via multi-view, multi-task sparse learning is proposed, where the multiple views comprise various types of visual features such as textures, edges, and invariant features. Each view of a candidate is regarded as a template, and the potential relationships between different tasks and different views are considered. These multiple views are integrated into the multi-task sparse learning framework. The proposed MVMT method can be applied to ship identification. Extensive experiments are conducted on public datasets and custom sequence frames (i.e., six sequence frames from ship videos). The experimental results show that the proposed method is superior to other classical methods, both qualitatively and quantitatively.
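A much-simplified stand-in for the multi-view sparse representation idea is sketched below. It is not the paper's joint multi-task solver: it codes each view independently with an l1 penalty and fuses per-class reconstruction residuals, and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def multiview_identify(views, dictionaries, atom_labels, alpha=0.01):
    """views        : list of feature vectors, one per view (texture, edge, ...)
    dictionaries : list of (d_v, n_atoms) template dictionaries, one per view
    atom_labels  : (n_atoms,) class label of each template column
    """
    classes = np.unique(atom_labels)
    residual = np.zeros(len(classes))
    for x, D in zip(views, dictionaries):
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, x)                        # sparse code of x over the templates
        w = coder.coef_
        for i, c in enumerate(classes):
            mask = atom_labels == c
            residual[i] += np.linalg.norm(x - D[:, mask] @ w[mask])
    return classes[np.argmin(residual)]        # class with the smallest fused residual
```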

20 pages, 16504 KiB  
Article
Multi-Input Attention Network for Dehazing of Remote Sensing Images
by Zhijie He, Cailan Gong, Yong Hu, Fuqiang Zheng and Lan Li
Appl. Sci. 2022, 12(20), 10523; https://doi.org/10.3390/app122010523 - 18 Oct 2022
Cited by 2 | Viewed by 1159
Abstract
The non-uniform haze distribution in remote sensing images, together with the complexity of the ground information, brings many difficulties to the dehazing of remote sensing images. In this paper, we propose a multi-input convolutional neural network based on an encoder–decoder structure to effectively restore hazy remote sensing images. The proposed network directly learns the mapping between hazy images and the corresponding haze-free images. It also effectively utilizes the strong haze-penetration characteristic of the infrared band. The network further includes an attention module and a global skip-connection structure, which enable it to learn haze-relevant features effectively and better preserve the ground information. We build a dataset for training and testing the proposed method, consisting of Sentinel-2 remote sensing images at two different resolutions and in nine bands. The experimental results demonstrate that our method outperforms traditional dehazing methods and other deep learning methods in terms of the final dehazing effect, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM).
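The attention module is not specified in detail here; a common squeeze-and-excitation-style channel attention block (an assumption, not necessarily the variant used in the paper) can be sketched in PyTorch as follows.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using global average-pooled context."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global spatial context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)

# Example: attention over 64-channel encoder features of a 256x256 tile.
feats = torch.randn(1, 64, 256, 256)
print(ChannelAttention(64)(feats).shape)   # torch.Size([1, 64, 256, 256])
```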

24 pages, 11609 KiB  
Article
YOLO-DSD: A YOLO-Based Detector Optimized for Better Balance between Accuracy, Deployability and Inference Time in Optical Remote Sensing Object Detection
by Hengxu Chen, Hong Jin and Shengping Lv
Appl. Sci. 2022, 12(15), 7622; https://doi.org/10.3390/app12157622 - 28 Jul 2022
Cited by 4 | Viewed by 2198
Abstract
Many deep learning (DL)-based detectors have been developed for optical remote sensing object detection in recent years. However, most recent detectors are developed in pursuit of higher accuracy rather than a balance between accuracy, deployability, and inference time, which hinders their practical application, especially on embedded devices. In order to achieve higher detection accuracy while simultaneously reducing computational consumption and inference time, a novel convolutional network named YOLO-DSD was developed based on YOLOv4. Firstly, a new feature extraction module, the dense residual (DenseRes) block, was proposed in the backbone network, utilizing a series-connected residual structure with the same topology to improve feature extraction while reducing computational consumption and inference time. Secondly, the convolution layer–batch normalization layer–leaky ReLU (CBL)×5 modules in the neck, named S-CBL×5, were improved with a short-cut connection to mitigate feature loss. Finally, a low-cost novel attention mechanism called the dual channel attention (DCA) block was introduced into each S-CBL×5 for a better representation of features. The experimental results on the DIOR dataset indicate that YOLO-DSD outperforms YOLOv4, increasing mAP0.5 from 71.3% to 73.0% with a 23.9% and 29.7% reduction in Params and FLOPs, respectively, and a 50.2% improvement in FPS. On the RSOD dataset, the mAP0.5 of YOLO-DSD is increased from 90.0~94.0% to 92.6~95.5% under different input sizes. Compared with SOTA detectors, YOLO-DSD achieves a better balance between accuracy, deployability, and inference time.
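Read literally, a "series-connected residual structure with the same topology" could be sketched as below. This is a speculative PyTorch interpretation only: the unit count, channel widths, and the 1×1 fusion of densely reused outputs are assumptions, and the paper's exact DenseRes topology may differ.

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """One residual unit: two 3x3 conv-BN layers with an identity shortcut."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class DenseResBlock(nn.Module):
    """Several identical residual units in series; their outputs are densely
    concatenated and fused by a 1x1 convolution."""
    def __init__(self, ch: int, n_units: int = 3):
        super().__init__()
        self.units = nn.ModuleList(ResUnit(ch) for _ in range(n_units))
        self.fuse = nn.Conv2d(ch * (n_units + 1), ch, 1)

    def forward(self, x):
        feats = [x]
        for unit in self.units:
            feats.append(unit(feats[-1]))
        return self.fuse(torch.cat(feats, dim=1))
```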
