Pattern Recognition and Image Processing for Remote Sensing III

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (5 January 2024)

Special Issue Editors


Guest Editor
School of Astronautics, Beihang University, Beijing 102206, China
Interests: computer vision and related applications in remote sensing; self-driving; video games

Guest Editor
School of Computer Science, Nankai University, Tianjin 300350, China
Interests: hyperspectral unmixing; remote sensing image processing; multi-objective optimization

Special Issue Information

Dear Colleagues,

This is the third volume of the Special Issue “Pattern Recognition and Image Processing for Remote Sensing”, following the success of the first two volumes.

Remote sensing provides a global perspective and a wealth of data about Earth systems, allowing us to visualize and analyze objects and features on the Earth's surface. Today, pattern recognition and image processing technologies are revolutionizing Earth observation and presenting unprecedented opportunities and challenges. Despite recent progress, open problems remain, such as deep learning with multi-modal and multi-resolution remote sensing images, lightweight processing for large-scale data, domain adaptation, and data fusion.

To address these challenges, this Special Issue focuses on presenting the latest advances in pattern recognition and image processing for remote sensing. We invite you to submit papers with methodological contributions and innovative applications. All types of image modalities are encouraged, such as multispectral imaging, hyperspectral imaging, synthetic aperture radar (SAR), multi-temporal imaging, LiDAR, etc. The platform is also unrestricted: sensing can be carried out using drones, aircraft, satellites, robots, etc. Any other applications related to remote sensing are also welcome. Potential topics include, but are not limited to, the following:

  • Pattern recognition and machine learning;
  • Deep learning;
  • Image classification, object detection, and image segmentation;
  • Change detection;
  • Image synthesis;
  • Multi-modal data fusion from different sensors;
  • Image quality improvement;
  • Real-time processing of remote sensing data;
  • Unsupervised learning and self-supervised learning;
  • Advanced deep learning techniques (e.g., generative adversarial networks, diffusion probabilistic models, and physics-informed neural networks);
  • Applications of remote sensing imagery in agriculture, marine science, meteorology, and other fields.

Dr. Zhengxia Zou
Dr. Bin Pan
Dr. Xia Xu
Dr. Zhou Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • pattern recognition
  • image processing
  • machine learning
  • deep learning

Published Papers (6 papers)

Research

19 pages
Article
SCCMDet: Adaptive Sparse Convolutional Networks Based on Class Maps for Real-Time Onboard Detection in Unmanned Aerial Vehicle Remote Sensing Images
by Qifan Tan, Xuqi Yang, Cheng Qiu, Yanhuan Jiang, Jinze He, Jingshuo Liu and Yahui Wu
Remote Sens. 2024, 16(6), 1031; https://doi.org/10.3390/rs16061031 - 14 Mar 2024
Abstract
Onboard, real-time object detection in unmanned aerial vehicle remote sensing (UAV-RS) has always been a prominent challenge due to the high image resolution required and the limited computing resources available. Because of this trade-off between accuracy and efficiency, the advantages of UAV-RS are difficult to fully exploit. Current sparse-convolution-based detectors convolve only a subset of meaningful features in order to accelerate inference. However, how best to select those meaningful features, which ultimately determines performance, remains an open question. This study proposes adaptive sparse convolutional networks based on class maps for real-time onboard detection in UAV-RS images (SCCMDet) to solve this problem. For data pre-processing, SCCMDet obtains real class maps as labels from the ground truth to supervise the feature selection process. In addition, a generate class map network (GCMN), equipped with a newly designed loss function, identifies the importance of features to generate a binary class map which filters the image for its more meaningful sparse features. Comparative experiments on the VisDrone dataset show that our method accelerates YOLOv8 by up to 41.94% and increases performance by 2.52%. Ablation experiments further demonstrate the effectiveness of the proposed model.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
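The core mechanism the abstract describes, convolving only at locations selected by a predicted binary class map, can be sketched in a few lines of PyTorch. The sketch below emulates sparse convolution with a dense convolution plus a hard mask (real sparse-convolution kernels skip masked positions entirely for speed); the module names, the 1x1 class-map head, and the threshold are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MaskedSparseConv(nn.Module):
    """Emulate class-map-guided sparse convolution with a dense conv + mask.

    A tiny class-map head predicts which spatial locations are worth
    convolving; everything else is zeroed out. Names and threshold are
    illustrative stand-ins (e.g., the head loosely plays the GCMN role).
    """

    def __init__(self, in_ch: int, out_ch: int, threshold: float = 0.5):
        super().__init__()
        self.class_map_head = nn.Conv2d(in_ch, 1, kernel_size=1)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft importance map in [0, 1], then a hard binary class map.
        importance = torch.sigmoid(self.class_map_head(x))
        mask = (importance > self.threshold).float()
        # Straight-through trick: forward uses the hard mask, gradients
        # flow through the soft importance map.
        mask = mask + importance - importance.detach()
        return self.conv(x * mask)

feat = torch.randn(1, 64, 128, 128)   # backbone feature map
out = MaskedSparseConv(64, 64)(feat)  # only "meaningful" regions contribute
print(out.shape)                      # torch.Size([1, 64, 128, 128])
```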

29 pages
Article
Learning Point Processes and Convolutional Neural Networks for Object Detection in Satellite Images
by Jules Mabon, Mathias Ortner and Josiane Zerubia
Remote Sens. 2024, 16(6), 1019; https://doi.org/10.3390/rs16061019 - 13 Mar 2024
Abstract
Convolutional neural networks (CNNs) have shown great results on object-detection tasks by learning texture- and pattern-extraction filters. However, object-level interactions are harder to grasp without increasing the complexity of the architectures. Point process models, on the other hand, solve for the configuration of objects as a whole, allowing both the image data and prior interactions between objects to be factored in. In this paper, we propose combining the information extracted by a CNN with priors on objects within a Markov marked point process framework. We also propose a method to learn the parameters of this energy-based model. We apply the model to the detection of small vehicles in optical satellite imagery, where the image information needs to be complemented with object interaction priors because of noise and small object sizes.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
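The combination the abstract describes, a CNN data term plus point-process interaction priors scored jointly as an energy over a configuration of objects, can be illustrated with a minimal sketch. The overlap penalty, the weights, and the function name below are illustrative stand-ins for the paper's marked-point-process interaction terms, not its actual model.

```python
import torch

def configuration_energy(scores: torch.Tensor,
                         positions: torch.Tensor,
                         min_dist: float = 8.0,
                         w_data: float = 1.0,
                         w_prior: float = 0.5) -> torch.Tensor:
    """Energy of an object configuration: CNN data term + pairwise prior.

    scores:    (N,) CNN confidence for each hypothesized object
    positions: (N, 2) object centers in pixels
    The repulsion prior penalizes pairs closer than `min_dist`; lower
    energy means a more plausible configuration.
    """
    # Data term: low energy where the CNN is confident.
    data_term = -scores.sum()
    # Pairwise prior: penalize overlapping detections, each pair once.
    d = torch.cdist(positions, positions)                # (N, N) distances
    too_close = (d < min_dist).float().triu(diagonal=1)
    prior_term = too_close.sum()
    return w_data * data_term + w_prior * prior_term

scores = torch.tensor([0.9, 0.8, 0.4])
pos = torch.tensor([[10.0, 10.0], [12.0, 11.0], [40.0, 40.0]])
print(configuration_energy(scores, pos))  # pair (0, 1) pays the overlap penalty
```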

25 pages
Article
WBIM-GAN: A Generative Adversarial Network Based Wideband Interference Mitigation Model for Synthetic Aperture Radar
by Xiaoyu Xu, Weiwei Fan, Siyao Wang and Feng Zhou
Remote Sens. 2024, 16(5), 910; https://doi.org/10.3390/rs16050910 - 4 Mar 2024
Abstract
Wideband interference (WBI) can significantly reduce the image quality and interpretation accuracy of synthetic aperture radar (SAR). To eliminate the negative effects of WBI on SAR, we propose a novel end-to-end data-driven approach: the WBI is mitigated by an explicit function, the WBI mitigation generative adversarial network (WBIM-GAN), that maps an input WBI-corrupted echo to the corresponding WBI-free echo. WBIM-GAN comprises a WBI mitigation network and a target-echo discriminative network. The WBI mitigation network incorporates a deep residual network to enhance WBI mitigation performance while addressing gradient saturation in the deeper layers. Class activation mapping demonstrates that the WBI mitigation network localizes the WBI region rather than the target echo. By utilizing the PatchGAN architecture, the target-echo discriminative network captures the local texture and statistical features of target echoes, thus improving the effectiveness of WBI mitigation. Before WBIM-GAN is applied, a short-time Fourier transform (STFT) converts the SAR echoes into a time-frequency domain (TFD) representation to better characterize WBI features. Finally, comparisons of different WBI mitigation methods on several real measured SAR datasets collected by the Sentinel-1 system demonstrate the efficiency and superiority of WBIM-GAN.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
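The pre-processing step is concrete enough to sketch: an STFT maps the complex SAR echo into the time-frequency domain, the mitigation network operates there, and an inverse STFT returns a cleaned echo. The sketch below uses SciPy with a toy echo; the sampling rate, chirp, and interference tone are illustrative assumptions, and the "network" is a placeholder, since only the STFT round trip comes from the abstract.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 100e6            # illustrative sampling rate, not from the paper
t = np.arange(4096) / fs
# Toy complex SAR echo: a target-like chirp plus a WBI tone that
# switches on partway through the record.
echo = np.exp(1j * np.pi * 1e12 * t**2)
echo += 2.0 * np.exp(2j * np.pi * 20e6 * t) * (t > 2e-5)

# Time-frequency representation fed to the mitigation network.
f, tt, tfd = stft(echo, fs=fs, nperseg=256, noverlap=192,
                  return_onesided=False)

# ... the network would predict a cleaned TFD here; identity placeholder ...
tfd_clean = tfd

_, echo_clean = istft(tfd_clean, fs=fs, nperseg=256, noverlap=192,
                      input_onesided=False)
print(tfd.shape, echo_clean.shape)
```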

20 pages
Article
SSGAM-Net: A Hybrid Semi-Supervised and Supervised Network for Robust Semantic Segmentation Based on Drone LiDAR Data
by Hua Wu, Zhe Huang, Wanhao Zheng, Xiaojing Bai, Li Sun and Mengyang Pu
Remote Sens. 2024, 16(1), 92; https://doi.org/10.3390/rs16010092 - 25 Dec 2023
Abstract
The semantic segmentation of drone LiDAR data is important in intelligent industrial operation and maintenance. However, current methods are not effective in directly processing airborne true-color point clouds that contain geometric and color noise. To overcome this challenge, we propose a novel hybrid learning framework, named SSGAM-Net, which combines supervised and semi-supervised modules for segmenting objects from airborne noisy point clouds. To the best of our knowledge, we are the first to build a true-color industrial point cloud dataset, which is obtained by drones and covers 90,000 m². Secondly, we propose a plug-and-play module, named the Global Adjacency Matrix (GAM), which utilizes only a few labeled samples to generate pseudo-labels and guide the network to learn spatial relationships between objects in semi-supervised settings. Finally, we build our point cloud semantic segmentation network, SSGAM-Net, which combines the semi-supervised GAM module and a supervised encoder-decoder module. Experiments comparing SSGAM-Net with existing advanced methods on our expert-labeled dataset show that it reaches 85.3% mIoU, which is 4.2 to 58.0 percentage points higher than the other methods.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
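The semi-supervised idea, deriving pseudo-labels for unlabeled points from their spatial relationships to a few labeled ones, can be illustrated with a generic k-nearest-neighbor label-propagation sketch. This is a stand-in for the paper's Global Adjacency Matrix module, whose exact construction is not given in the abstract; the function name and parameters are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_pseudo_labels(points: np.ndarray,
                            labels: np.ndarray,
                            k: int = 5) -> np.ndarray:
    """One round of k-NN label propagation over a point cloud.

    points: (N, 3) xyz coordinates; labels: (N,) with -1 for unlabeled.
    Unlabeled points take the majority label of their k nearest
    labeled neighbors.
    """
    labeled = labels >= 0
    tree = cKDTree(points[labeled])             # spatial index on seed points
    known = labels[labeled]
    pseudo = labels.copy()
    _, idx = tree.query(points[~labeled], k=k)  # k nearest labeled points
    votes = known[idx]                          # (M, k) candidate labels
    pseudo[~labeled] = np.array([np.bincount(v).argmax() for v in votes])
    return pseudo

pts = np.random.rand(100, 3)
lbl = np.full(100, -1)
lbl[:10] = np.random.randint(0, 3, 10)          # only 10 labeled points
print(propagate_pseudo_labels(pts, lbl)[:20])
```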

17 pages
Article
A Generative Adversarial Network with Spatial Attention Mechanism for Building Structure Inference Based on Unmanned Aerial Vehicle Remote Sensing Images
by Hao Chen, Zhixiang Guo, Xing Meng and Fachuan He
Remote Sens. 2023, 15(18), 4390; https://doi.org/10.3390/rs15184390 - 6 Sep 2023
Abstract
The acquisition of building structures has broad applications across various fields. However, existing methods for inferring building structures predominantly depend on manual expertise and lack automation. To tackle this challenge, we propose a building structure inference network that uses UAV remote sensing images, with the PIX2PIX network as its foundational framework. We enhance the generator with an additive attention module that performs multi-scale feature fusion, combining features from diverse spatial resolutions of the feature map and strengthening the model's capability to capture global relationships during the mapping process. To ensure the completeness of line elements in the generator's output, we design a novel loss function based on the Hough transform: because the original loss cannot effectively constrain the completeness of straight-line elements in the spatial domain, a line penalty term is introduced that transforms the generator's output and the ground truth into the Hough domain. We also build a dataset pairing the appearance features obtained from UAV remote sensing images with the corresponding internal floor plan structures. Using UAV remote sensing images of multi-story residential buildings, high-rise residential buildings, and office buildings as test collections, the experiments show that our method better infers room layouts and the locations of load-bearing columns, achieving average improvements of 11.2% and 21.1% over PIX2PIX in terms of IoU and RMSE, respectively.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
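The Hough-domain line penalty can be illustrated with a minimal sketch: transform the predicted and ground-truth maps into the Hough domain and compare the accumulators, so broken lines are penalized even when per-pixel differences are small. The version below uses scikit-image's hough_line purely to show the idea; it is not differentiable, so a training loss would need a differentiable Hough formulation, and the normalization is an illustrative choice.

```python
import numpy as np
from skimage.transform import hough_line

def hough_line_penalty(pred: np.ndarray, gt: np.ndarray) -> float:
    """L1 distance between Hough accumulators of two binary line maps."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, 180, endpoint=False)
    h_pred, _, _ = hough_line(pred, theta=theta)
    h_gt, _, _ = hough_line(gt, theta=theta)
    # Normalize accumulators so the penalty is scale-free.
    h_pred = h_pred / max(h_pred.max(), 1)
    h_gt = h_gt / max(h_gt.max(), 1)
    return float(np.abs(h_pred - h_gt).mean())

gt = np.zeros((64, 64), dtype=bool)
gt[32, 10:50] = True          # a complete wall line in the ground truth
pred = gt.copy()
pred[32, 25:35] = False       # the same line, broken in the prediction
print(hough_line_penalty(pred, gt))  # broken line scores a nonzero penalty
```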

16 pages
Article
Lie Group Equivariant Convolutional Neural Network Based on Laplace Distribution
by Dengfeng Liao and Guangzhong Liu
Remote Sens. 2023, 15(15), 3758; https://doi.org/10.3390/rs15153758 - 28 Jul 2023
Cited by 1
Abstract
Traditional convolutional neural networks (CNNs) lack equivariance to transformations such as rotation and scaling, and consequently exhibit weak robustness when an input image undergoes such generic transformations. Moreover, their complex model structure complicates the interpretation of learned low- and mid-level features. To address these issues, we introduce a Lie group equivariant convolutional neural network based on the Laplace distribution. The model's Lie group characteristics blend multiple mid- and low-level features in the image representation, revealing the Lie group geometry and spatial structure of the Laplace distribution function space. It is computationally efficient and robust to noise while capturing relevant information between image regions and features. Additionally, it formulates an equivariant convolutional network appropriate for Lie group feature maps, maximizing the use of equivariant features at each level and boosting data efficiency. Experiments on three remote sensing datasets confirm the feasibility and superiority of the method: while maintaining high accuracy, it enhances data utility and interpretability.
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing III)
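Group-equivariant convolution, the general technique underlying this paper, can be shown in miniature with a C4 (90-degree rotation) lifting layer in PyTorch: the input is convolved with four rotated copies of one filter bank, so rotating the input rotates the output maps and cycles the group axis. This toy discrete-group sketch only gestures at the paper's richer Lie group and Laplace-distribution construction; the class and its details are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4LiftingConv(nn.Module):
    """Minimal C4-equivariant 'lifting' convolution.

    Convolves the input with four rotated copies of one filter bank and
    stacks the results along a new group axis.
    """

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = []
        for r in range(4):  # rotate the filters by r * 90 degrees
            w = torch.rot90(self.weight, r, dims=(2, 3))
            outs.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        return torch.stack(outs, dim=2)  # (B, out_ch, 4, H, W)

x = torch.randn(1, 3, 32, 32)
conv = C4LiftingConv(3, 8)
y = conv(x)
y_rot = conv(torch.rot90(x, 1, dims=(2, 3)))
# Equivariance check: rotating the input rotates the output spatially
# and cycles the group axis by one step.
print(torch.allclose(torch.rot90(y, 1, dims=(3, 4)).roll(1, dims=2),
                     y_rot, atol=1e-5))
```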
