Reinforcement Learning Algorithm in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (10 July 2023) | Viewed by 9698

Special Issue Editors

School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: image processing; object recognition; remote sensing image analysis

Special Issue Information

Dear Colleagues,

As an important technique in deep learning, deep reinforcement learning emerged in the last decade and was soon introduced into the computer vision domain. In recent years, it has continued to gain applications in the remote sensing community (e.g., image classification, object detection, object tracking, and so on). To systematically promote reinforcement learning-driven remote sensing applications, this Special Issue aims to collect research achievements on remote sensing image interpretation methods based on reinforcement learning.

This Special Issue aims to collect and discuss the various applications of reinforcement learning in the field of remote sensing. Topics may include, but are not limited to: reinforcement learning-based network architecture search; reinforcement learning-based remote sensing image understanding; and so forth.

Dr. Yansheng Li
Dr. Yihua Tan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • reinforcement learning theory
  • network architecture search via reinforcement learning
  • reinforcement learning-driven remote sensing image retrieval
  • reinforcement learning-driven remote sensing image classification
  • reinforcement learning-driven remote sensing image object detection
  • reinforcement learning-driven remote sensing image object tracking
  • reinforcement learning-driven remote sensing image captioning
  • hyperspectral image compression via reinforcement learning
  • large-size remote sensing semantic segmentation via reinforcement learning

Published Papers (4 papers)

Research

20 pages, 4892 KiB  
Article
DRL-Based Dynamic Destroy Approaches for Agile-Satellite Mission Planning
by Wei Huang, Zongwang Li, Xiaohe He, Junyan Xiang, Xu Du and Xuwen Liang
Remote Sens. 2023, 15(18), 4503; https://doi.org/10.3390/rs15184503 - 13 Sep 2023
Viewed by 851
Abstract
Agile-satellite mission planning is a crucial issue in the construction of satellite constellations. The large scale of remote sensing missions and the high complexity of constraints in agile-satellite mission planning pose challenges in the search for an optimal solution. To tackle this issue, a dynamic destroy deep-reinforcement learning (D3RL) model is designed to facilitate subsequent optimization operations via adaptive destruction of existing solutions. Specifically, we first perform a clustering and embedding operation to reconstruct tasks into a clustering graph, thereby improving data utilization. Secondly, the D3RL model is established based on graph attention networks (GATs) to enhance the search efficiency for optimal solutions. Moreover, we present two applications of the D3RL model for intensive scenes: the deep-reinforcement learning (DRL) method and the D3RL-based large-neighborhood search method (DRL-LNS). Experimental simulation results illustrate that the D3RL-based approaches outperform the competition in terms of solution quality and computational efficiency, particularly in more challenging large-scale scenarios. DRL-LNS outperforms ALNS with an average scheduling-rate improvement of approximately 11% in Area instances, while the DRL approach performs better in World scenarios, with an average scheduling rate around 8% higher than that of ALNS.
(This article belongs to the Special Issue Reinforcement Learning Algorithm in Remote Sensing)
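
For readers unfamiliar with the destroy-and-repair idea, the sketch below illustrates, in rough Python that is not the authors' D3RL code, how a single graph-attention layer could score clustered tasks so that part of an existing solution is destroyed before a large-neighborhood repair step. The feature sizes, random weights, scoring head, and top-k selection rule are all assumptions made for illustration.

# Minimal sketch (not the authors' code): a graph-attention layer scores task
# clusters, and the top-k clusters are "destroyed" for re-insertion by LNS.
import numpy as np

def gat_layer(h, adj, W, a):
    """One graph-attention forward pass.
    h: (N, F) node features, adj: (N, N) 0/1 adjacency, W: (F, F'), a: (2F',)."""
    z = h @ W                                   # (N, F') linear projection
    N = z.shape[0]
    e = np.zeros((N, N))                        # attention logits e_ij
    for i in range(N):
        for j in range(N):
            e[i, j] = np.concatenate([z[i], z[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU
    e = np.where(adj > 0, e, -1e9)              # mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over neighbours
    return alpha @ z                            # (N, F') aggregated features

def select_destroy(h, adj, W, a, w_out, k):
    """Score each task cluster and return the k clusters to destroy."""
    scores = gat_layer(h, adj, W, a) @ w_out    # one scalar score per cluster
    return np.argsort(scores)[-k:]              # indices of the k highest scores

rng = np.random.default_rng(0)
N, F = 8, 4                                     # 8 task clusters, 4 features each
h = rng.normal(size=(N, F))
adj = (rng.random((N, N)) < 0.4).astype(float); np.fill_diagonal(adj, 1)
destroyed = select_destroy(h, adj, rng.normal(size=(F, F)),
                           rng.normal(size=2 * F), rng.normal(size=F), k=3)
print("clusters selected for destruction:", destroyed)

In a full pipeline the destroyed clusters would be re-inserted by the repair operator of the large-neighborhood search, and the attention weights would be trained with a reinforcement learning objective rather than drawn at random.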

20 pages, 9247 KiB  
Article
MD3: Model-Driven Deep Remotely Sensed Image Denoising
by Zhenghua Huang, Zifan Zhu, Yaozong Zhang, Zhicheng Wang, Biyun Xu, Jun Liu, Shaoyi Li and Hao Fang
Remote Sens. 2023, 15(2), 445; https://doi.org/10.3390/rs15020445 - 11 Jan 2023
Cited by 1 | Viewed by 1713
Abstract
Remotely sensed images degraded by additive white Gaussian noise (AWGN) have poor visual quality, which hinders the analysis of their contents. To reduce AWGN, two types of denoising strategies are commonly utilized, sparse-coding-model-based and deep-neural-network-based (DNN), each with its own merits and drawbacks: the former achieve pleasing performance at a high computational cost, while the latter are powerful and efficient for a specified task, but this task specificity limits their application range. To combine their merits and improve performance efficiently, this paper proposes a model-driven deep denoising (MD3) scheme. To solve the MD3 model, we first decompose it into several subproblems using the alternating direction method of multipliers (ADMM). The denoising subproblems are then replaced by different learnable denoisers, which are plugged into the unfolded MD3 model to efficiently produce a stable solution. Both quantitative and qualitative results validate that the proposed MD3 approach is effective and efficient, and that it is better at delivering pleasing denoising results and preserving rich textures than other advanced methods.
(This article belongs to the Special Issue Reinforcement Learning Algorithm in Remote Sensing)
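
The plug-and-play ADMM unfolding mentioned in the abstract follows a standard pattern; the sketch below shows that pattern on a toy AWGN problem, with a simple box filter standing in for the learnable denoisers. The penalty parameter, iteration count, and stand-in filter are assumptions, not the MD3 configuration.

# Minimal sketch (assumptions, not the MD3 implementation): plug-and-play ADMM
# unfolding for Gaussian denoising, y = x + n.  The z-subproblem (a proximal /
# denoising step) is replaced by a pluggable denoiser, here a box-filter stand-in.
import numpy as np

def box_denoiser(img, k=3):
    """Stand-in for a learnable denoiser D_theta: simple k x k mean filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unfolded_admm(y, denoiser, rho=1.0, iters=5):
    """Unfolded ADMM for  min_x 0.5||y - x||^2 + lambda R(x)  with x = z splitting."""
    x = y.copy(); z = y.copy(); u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)   # data-fidelity step (closed form)
        z = denoiser(x + u)                     # prox of R replaced by the denoiser
        u = u + x - z                           # dual (multiplier) update
    return x

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))          # toy "image"
noisy = clean + 0.1 * rng.normal(size=clean.shape)        # AWGN degradation
restored = unfolded_admm(noisy, box_denoiser)
print("noisy MSE   :", np.mean((noisy - clean) ** 2))
print("restored MSE:", np.mean((restored - clean) ** 2))

The point of the unfolding is that the z-update can be swapped for any learned denoiser without touching the closed-form data-fidelity step or the dual update, which is the "plug-in" behaviour the abstract describes.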

18 pages, 6669 KiB  
Article
Dark Spot Detection from SAR Images Based on Superpixel Deeper Graph Convolutional Network
by Xiaojian Liu, Yansheng Li, Xinyi Liu and Huimin Zou
Remote Sens. 2022, 14(21), 5618; https://doi.org/10.3390/rs14215618 - 07 Nov 2022
Cited by 4 | Viewed by 1948
Abstract
Synthetic Aperture Radar (SAR) is the primary equipment used to detect oil slicks on the ocean’s surface. On SAR images, oil spill regions, as well as other areas impacted by atmospheric and oceanic phenomena such as rain cells, upwellings, and internal waves, appear as dark spots. Dark spot detection is typically the initial stage in the identification of oil spills. Because the identified dark spots are oil slick candidates, the quality of dark spot segmentation will ultimately impact the accuracy of oil slick identification. Although certain sophisticated deep learning approaches employing pixels as primary processing units work well in remote sensing image semantic segmentation, finding dark spots with weak boundaries and small areas in noisy SAR images remains a significant challenge. In light of the foregoing, this paper proposes a dark spot detection method based on superpixels and deeper graph convolutional networks (SGDCNs), with superpixels serving as processing units. The contours of dark spots can be better detected after superpixel segmentation, and the noise in the SAR image can also be smoothed. Furthermore, features derived from superpixel regions are more robust than those derived from fixed pixel neighborhoods. Using the support vector machine recursive feature elimination (SVM-RFE) feature selection algorithm, we obtain an excellent subset of superpixel features for segmentation to reduce the difficulty of the learning task. After that, the SAR images are transformed into graphs with superpixels as nodes, which are fed into the deeper graph convolutional neural network for node classification. SGDCN leverages a differentiable aggregation function to aggregate the node and neighbor features to form more advanced features. To validate our method, we manually annotated six typical large-scale SAR images covering the Baltic Sea and constructed a dark spot detection dataset. The experimental results demonstrate that our proposed SGDCN is robust and effective compared with several competitive baselines. The dataset has been made publicly available along with this paper.
(This article belongs to the Special Issue Reinforcement Learning Algorithm in Remote Sensing)
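
As a rough illustration of the superpixel-as-node formulation (not the released SGDCN code), the sketch below builds a region-adjacency graph from a superpixel label map and runs one GCN-style aggregation over it. The toy label map, feature dimensions, random weights, and two-class head are assumptions made for illustration.

# Rough sketch: region-adjacency graph over superpixels, then one GCN-style layer
# for node (superpixel) classification, e.g. dark spot vs. sea surface.
import numpy as np

def superpixel_graph(labels, pixel_feats):
    """labels: (H, W) superpixel ids 0..N-1; pixel_feats: (H, W, F).
    Returns mean node features (N, F) and a symmetric adjacency (N, N)."""
    n = labels.max() + 1
    feats = np.zeros((n, pixel_feats.shape[-1]))
    counts = np.zeros(n)
    adj = np.eye(n)                                      # self-loops
    H, W = labels.shape
    for y in range(H):
        for x in range(W):
            s = labels[y, x]
            feats[s] += pixel_feats[y, x]
            counts[s] += 1
            for ny, nx in ((y + 1, x), (y, x + 1)):       # 4-neighbourhood
                if ny < H and nx < W and labels[ny, nx] != s:
                    adj[s, labels[ny, nx]] = adj[labels[ny, nx], s] = 1
    return feats / counts[:, None], adj

def gcn_layer(x, adj, W):
    """Symmetrically normalised aggregation: D^-1/2 A D^-1/2 X W, then ReLU."""
    d = adj.sum(axis=1)
    norm = adj / np.sqrt(np.outer(d, d))
    return np.maximum(norm @ x @ W, 0)

rng = np.random.default_rng(0)
labels = np.repeat(np.repeat(np.arange(9).reshape(3, 3), 10, 0), 10, 1)  # 9 blocks
pix_feat = rng.normal(size=(30, 30, 5))                  # toy per-pixel features
node_x, adj = superpixel_graph(labels, pix_feat)
hidden = gcn_layer(node_x, adj, rng.normal(size=(5, 2)))  # untrained 2-class head
print("per-superpixel scores:\n", hidden)

In the paper's setting, the per-superpixel features would come from the SVM-RFE-selected subset, the network would be deeper with trained weights, and the output classes would be dark spot versus background.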

17 pages, 4704 KiB  
Article
DNAS: Decoupling Neural Architecture Search for High-Resolution Remote Sensing Image Semantic Segmentation
by Yu Wang, Yansheng Li, Wei Chen, Yunzhou Li and Bo Dang
Remote Sens. 2022, 14(16), 3864; https://doi.org/10.3390/rs14163864 - 09 Aug 2022
Cited by 6 | Viewed by 4048
Abstract
Deep learning methods, especially deep convolutional neural networks (DCNNs), have been widely used in high-resolution remote sensing image (HRSI) semantic segmentation. In the literature, most successful DCNNs are designed manually through a large number of experiments, which is time-consuming and depends on rich domain knowledge. Recently, neural architecture search (NAS), as a direction for automatically designing network architectures, has achieved great success in different kinds of computer vision tasks. For HRSI semantic segmentation, NAS faces two major challenges: (1) the high complexity of the task, caused by the pixel-by-pixel prediction required in semantic segmentation, leads to a rapid expansion of the search space; (2) HRSI semantic segmentation often needs to exploit long-range dependency (i.e., a large spatial context), which means the NAS technique requires a large amount of GPU memory during optimization and can be difficult to converge. With the aforementioned considerations in mind, we propose a new decoupling NAS (DNAS) framework to automatically design the network architecture for HRSI semantic segmentation. In DNAS, a hierarchical search space with three levels is recommended: path-level, connection-level, and cell-level. To adapt to this hierarchical search space, we devise a new decoupling search optimization strategy to decrease memory occupation. More specifically, the search optimization strategy consists of three stages: (1) a light super-net (i.e., the specific search space) in the path-level space is trained to obtain the optimal path coding; (2) the optimal path is endowed with various cross-layer connections and trained to obtain the connection coding; (3) the super-net, initialized by the path coding and connection coding, is populated with various concrete cell operators, and the optimal cell operators are finally determined. It is worth noting that the well-designed search space can cover various network candidates and the optimization process can be conducted efficiently. Extensive experiments on the publicly available GID and FU datasets showed that our DNAS outperformed state-of-the-art methods, including manually designed networks and NAS methods.
(This article belongs to the Special Issue Reinforcement Learning Algorithm in Remote Sensing)
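
The three-stage decoupling idea can be summarized in a few lines. The toy sketch below fixes the path coding first, then the cross-layer connections, then the cell operators, with a random scoring stub standing in for super-net training; all names, the candidate lists, and the 0.5 threshold are assumptions, not the DNAS search space.

# Illustrative sketch only: decoupled three-stage architecture search over a toy space.
import itertools, random

random.seed(0)
PATHS = ["down-down-up-up", "down-up-down-up", "down-down-down-up"]   # path-level
CONNECTIONS = list(itertools.combinations(range(4), 2))               # cross-layer links
CELL_OPS = ["sep_conv_3x3", "dil_conv_3x3", "max_pool_3x3", "skip"]   # cell-level

def evaluate(architecture):
    """Stub for a candidate's validation score; a real NAS would train a super-net."""
    return random.random()

# Stage 1: fix the optimal path coding.
best_path = max(PATHS, key=lambda p: evaluate({"path": p}))

# Stage 2: given the path, choose which cross-layer connections to keep.
best_conns = [c for c in CONNECTIONS
              if evaluate({"path": best_path, "conn": c}) > 0.5]

# Stage 3: given path + connections, pick a concrete operator for each of 4 cells.
best_cells = [max(CELL_OPS, key=lambda op: evaluate(
                  {"path": best_path, "conn": best_conns, "cell": (i, op)}))
              for i in range(4)]

print("path coding      :", best_path)
print("connection coding:", best_conns)
print("cell operators   :", best_cells)

Because each stage fixes its decisions before the next begins, only a slice of the full hierarchical search space has to be held in memory at any one time, which is the motivation the abstract gives for decoupling the search.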
