
Advanced Deep Learning Techniques for Earth Observation and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 28957

Special Issue Editors


Guest Editor
Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
Interests: remote sensing image processing and analysis; computer vision; pattern recognition; machine learning

Guest Editor
School of Statistics and Data Science, Nankai University, Tianjin 300071, China
Interests: hyperspectral image processing; remote sensing; machine learning

Guest Editor
HiWing Satellite Operation Division, The Third Institute of China Aerospace Science and Industry Corporation (CASIC), Beijing, China
Interests: hyperspectral image processing and pattern recognition

Special Issue Information

Dear Colleagues,

Satellite sensors are of great value to Earth observation thanks to their high-frequency revisit, wide spatial coverage, and relatively low cost. In recent years, the rapid growth of deep learning has significantly expanded the potential for developing advanced algorithms for remote sensing applications such as urban monitoring, land observation, and sea surveillance. However, the ever-increasing volume of remote sensing data places higher demands on learning algorithms: how to extract information from massive remote sensing data effectively and efficiently, in support of specific applications, remains a promising research direction.

This Special Issue aims to exploit advanced deep learning techniques to further advance the extraction of geoscience information from remote sensing data. Potential topics include, but are not limited to:

  • Supervised/self-supervised/semi-supervised learning for remote sensing data analysis
  • High-resolution remote sensing image processing based on deep learning
  • Efficient neural networks for remote sensing data processing
  • Image classification, semantic segmentation, target detection and change detection in remote sensing images
  • Adversarial learning for remote sensing image processing

Prof. Dr. Zhenwei Shi
Dr. Bin Pan
Dr. Shuo Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Earth observation
  • Deep learning
  • Image classification
  • Change detection
  • Target detection
  • Supervised learning

Published Papers (8 papers)


Research

21 pages, 13570 KiB  
Article
Air-Ground Multi-Source Image Matching Based on High-Precision Reference Image
by Yongxian Zhang, Guorui Ma and Jiao Wu
Remote Sens. 2022, 14(3), 588; https://doi.org/10.3390/rs14030588 - 26 Jan 2022
Cited by 6 | Viewed by 2407
Abstract
Robustness of aerial-ground multi-source image matching is closely related to the quality of the ground reference image. To explore the influence of reference images on the performance of air-ground multi-source image matching, we focused on the impact of control point projection accuracy and tie point accuracy on bundle adjustment results for generating digital orthophoto images, using the Structure from Motion algorithm and Monte Carlo analysis. Additionally, we developed a method to learn local deep features in natural environments by fine-tuning a pre-trained ResNet50 model and used it to match multi-scale, multi-seasonal, and multi-viewpoint air-ground multi-source images. The results show that the proposed method yields a relatively even distribution of corresponding feature points across different conditions, seasons, viewpoints, and illuminations. Compared with state-of-the-art hand-crafted computer vision and deep learning matching methods, the proposed method demonstrated more efficient and robust matching performance and could be applied to a variety of unmanned aerial vehicle self- and target-positioning applications in GPS-denied areas. Full article
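Whatever network produces the local descriptors, the final matching stage in pipelines of this kind typically pairs descriptors by nearest-neighbour search with Lowe's ratio test. The sketch below is an illustrative NumPy baseline under that assumption, not code from the paper; the `ratio_test_match` helper and its 0.8 threshold are hypothetical choices.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in A to its nearest neighbour in B, keeping the
    pair only when the best match is clearly closer than the second best."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)  # (Na, Nb)
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = np.sqrt(d2[rows, best]) < ratio * np.sqrt(d2[rows, second])
    return [(int(i), int(best[i])) for i in np.nonzero(keep)[0]]

# Toy descriptors: B is a slightly perturbed copy of A, so i should match i.
rng = np.random.default_rng(0)
feats_a = rng.standard_normal((10, 16))
feats_b = feats_a + 0.01 * rng.standard_normal((10, 16))
matches = ratio_test_match(feats_a, feats_b)
```

In a real air-ground pipeline the descriptors would come from the learned network and the surviving matches would feed a geometric verification step such as RANSAC.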

20 pages, 2858 KiB  
Article
S2Looking: A Satellite Side-Looking Dataset for Building Change Detection
by Li Shen, Yao Lu, Hao Chen, Hao Wei, Donghai Xie, Jiabao Yue, Rui Chen, Shouye Lv and Bitao Jiang
Remote Sens. 2021, 13(24), 5094; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13245094 - 15 Dec 2021
Cited by 61 | Viewed by 6776
Abstract
Building-change detection underpins many important applications, especially in the military and crisis-management domains. Recent methods used for change detection have shifted towards deep learning, which depends on the quality of its training data. The assembly of large-scale annotated satellite imagery datasets is therefore essential for global building-change surveillance. Existing datasets almost exclusively offer near-nadir viewing angles. This limits the range of changes that can be detected. By offering larger observation ranges, the scroll imaging mode of optical satellites presents an opportunity to overcome this restriction. This paper therefore introduces S2Looking, a building-change-detection dataset that contains large-scale side-looking satellite images captured at various off-nadir angles. The dataset consists of 5000 bitemporal image pairs of rural areas and more than 65,920 annotated instances of changes throughout the world. The dataset can be used to train deep-learning-based change-detection algorithms. It expands upon existing datasets by providing (1) larger viewing angles; (2) large illumination variances; and (3) the added complexity of rural images. To facilitate the use of the dataset, a benchmark task has been established, and preliminary tests suggest that deep-learning algorithms find the dataset significantly more challenging than the closest-competing near-nadir dataset, LEVIR-CD+. S2Looking may therefore promote important advances in existing building-change-detection algorithms. Full article
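As a sense of scale for what datasets like S2Looking ask of a model, the weakest possible baseline is a per-pixel bitemporal difference; deep change-detection networks exist precisely because this breaks down under the off-nadir viewing angles and illumination variance the dataset provides. A NumPy sketch of that naive baseline (the 0.2 threshold is an arbitrary illustrative value):

```python
import numpy as np

def naive_change_map(img_t1, img_t2, thresh=0.2):
    """Naive bitemporal change detection: absolute per-pixel difference,
    averaged over bands and thresholded into a binary change mask."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    score = diff.mean(axis=-1)   # (H, W) change magnitude
    return score > thresh        # binary change mask

# Toy pair: identical scenes except a simulated 10x10 "new building".
rng = np.random.default_rng(0)
t1 = rng.random((64, 64, 3)) * 0.1
t2 = t1.copy()
t2[20:30, 20:30] += 0.5
mask = naive_change_map(t1, t2)
print(mask.sum())  # 100
```

On real bitemporal pairs the two acquisitions differ in geometry and lighting even where nothing changed, which is exactly the failure mode learned methods address.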

20 pages, 4143 KiB  
Article
A Residual Attention and Local Context-Aware Network for Road Extraction from High-Resolution Remote Sensing Imagery
by Ziwei Liu, Mingchang Wang, Fengyan Wang and Xue Ji
Remote Sens. 2021, 13(24), 4958; https://doi.org/10.3390/rs13244958 - 7 Dec 2021
Cited by 8 | Viewed by 2492
Abstract
Extracting road information from high-resolution remote sensing images (HRI) can provide crucial geographic information for many applications. With the improvement of remote sensing image resolution, the image data contain more abundant feature information. However, this also increases the spatial heterogeneity between different types of roads, making it difficult to accurately discern road and non-road regions using spectral characteristics alone. To remedy these issues, a novel residual attention and local context-aware network (RALC-Net) is proposed for extracting a complete and continuous road network from HRI. RALC-Net uses a dual-encoder structure to improve the feature extraction capability of the network, with the two branches taking different types of feature information as input. Specifically, we construct a residual attention module that combines residual connections, which integrate spatial context information, with an attention mechanism that highlights local semantics; this design retains complete road edge information, emphasizes essential semantics, and enhances the generalization capability of the network model. In addition, a multi-scale dilated convolution module is used to capture multi-scale spatial receptive fields and further improve the model's performance. We verify the contribution of each component of RALC-Net through ablation studies. By combining low-level features with high-level semantics, we extract road information and compare the results with other state-of-the-art models. The experimental results show that the proposed RALC-Net has excellent feature representation ability and robust generalizability, and can extract complete road information from complex environments. Full article
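The residual attention pattern described in the abstract, an attention map that re-weights features and is added back through a residual connection, reduces to a few lines. This NumPy sketch shows only the generic pattern out = x + x * attn(x); the channel-mixing used here to produce the attention logits is an assumption for illustration, not the RALC-Net module itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_block(x, w):
    """Generic residual-attention pattern: out = x + x * attn(x).

    x : (C, H, W) feature map
    w : (C, C) channel-mixing weights producing the attention logits
    """
    logits = np.tensordot(w, x, axes=([1], [0]))  # (C, H, W) channel mix
    attn = sigmoid(logits)                        # attention map in (0, 1)
    return x + x * attn                           # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w = rng.standard_normal((8, 8)) * 0.1
y = residual_attention_block(x, w)
```

Because the attention values lie in (0, 1), each output element keeps the sign of its input and is scaled by a factor between 1 and 2, which is what makes such blocks easy to stack without destroying the identity path.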

15 pages, 4866 KiB  
Article
Hyperspectral Target Detection with an Auxiliary Generative Adversarial Network
by Yanlong Gao, Yan Feng and Xumin Yu
Remote Sens. 2021, 13(21), 4454; https://doi.org/10.3390/rs13214454 - 5 Nov 2021
Cited by 14 | Viewed by 1812
Abstract
In recent years, deep neural networks (DNNs) have shown strong performance in classification tasks, and their effectiveness has been well proven. However, DNN frameworks usually require a large number of samples. Compared to the training sets in classification tasks, the training sets for hyperspectral target detection may include only a few target spectra, which are quite limited and precious. The insufficiency of labeled samples makes DNN-based hyperspectral target detection a challenging problem. To address this problem, we propose a hyperspectral target detection approach with an auxiliary generative adversarial network. Specifically, the training set is first expanded by generating simulated target spectra and background spectra using the generative adversarial network. Then, a classifier that is closely tied to the discriminator of the generative adversarial network is trained on the real and generated spectra. Finally, to further suppress the background, guided filters are used to improve the smoothness and robustness of the detection results. Experiments conducted on real hyperspectral images show that the proposed approach performs more efficiently and accurately than other target detection approaches. Full article
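The guided filtering used in the final background-suppression step is the standard edge-preserving filter of He et al. A minimal single-channel NumPy version is sketched below; it is independent of the paper's implementation, and the radius and regularization values are illustrative.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with window radius r, computed via cumulative sums
    over an edge-padded image."""
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * r + 1
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / (n * n)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Edge-preserving smoothing of `src`, steered by edges in `guide`."""
    mean_g = box_filter(guide, r)
    mean_s = box_filter(src, r)
    cov_gs = box_filter(guide * src, r) - mean_g * mean_s
    var_g = box_filter(guide * guide, r) - mean_g * mean_g
    a = cov_gs / (var_g + eps)        # local linear coefficients
    b = mean_s - a * mean_g
    return box_filter(a, r) * guide + box_filter(b, r)

rng = np.random.default_rng(1)
guide = rng.random((32, 32))                         # e.g. a reference band
noisy = guide + 0.1 * rng.standard_normal((32, 32))  # e.g. raw detection map
smoothed = guided_filter(guide, noisy, r=2, eps=1e-4)
```

In the detection setting, `guide` would typically be a band of the hyperspectral image and `src` the raw detection score map, so smoothing follows scene edges rather than blurring across them.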

22 pages, 5395 KiB  
Article
Building Extraction from Remote Sensing Images with Sparse Token Transformers
by Keyan Chen, Zhengxia Zou and Zhenwei Shi
Remote Sens. 2021, 13(21), 4441; https://doi.org/10.3390/rs13214441 - 5 Nov 2021
Cited by 59 | Viewed by 7265
Abstract
Deep learning methods have achieved considerable progress in building extraction from remote sensing images. Most building extraction methods are based on Convolutional Neural Networks (CNNs). Recently, vision transformers have provided a better perspective for modeling long-range context in images, but they usually suffer from high computational complexity and memory usage. In this paper, we explore the potential of transformers for efficient building extraction. We design an efficient dual-pathway transformer structure that learns long-term dependencies between tokens in both the spatial and channel dimensions and achieves state-of-the-art accuracy on benchmark building extraction datasets. Since individual buildings in remote sensing images usually occupy only a very small fraction of the image pixels, we represent buildings as a set of "sparse" feature vectors in feature space by introducing a new module called the "sparse token sampler". With this design, the computational complexity of the transformer can be reduced by over an order of magnitude. We refer to our method as Sparse Token Transformers (STT). Experiments conducted on the Wuhan University Aerial Building Dataset (WHU) and the Inria Aerial Image Labeling Dataset (INRIA) suggest the effectiveness and efficiency of our method. Compared with widely used segmentation methods and state-of-the-art building extraction methods, STT achieves the best performance at a low time cost. Full article
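The core idea of a sparse token sampler, scoring tokens and running attention only among the few that matter, can be illustrated compactly. The NumPy sketch below is a schematic of the principle, not the STT module: the scoring function, the top-k rule, and the single attention head are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_token_attention(tokens, scores, k):
    """Self-attention restricted to the k highest-scoring tokens.

    tokens : (N, D) token features; scores : (N,) saliency scores.
    Pairwise attention cost drops from O(N^2) to O(k^2).
    """
    idx = np.argsort(scores)[-k:]             # indices of the k sparse tokens
    sel = tokens[idx]                         # (k, D)
    d = tokens.shape[1]
    attn = softmax(sel @ sel.T / np.sqrt(d))  # (k, k) attention weights
    out = tokens.copy()
    out[idx] = attn @ sel                     # update only selected tokens
    return out, idx

rng = np.random.default_rng(0)
tokens = rng.standard_normal((256, 32))  # e.g. a 16x16 patch grid, 32-dim
scores = rng.random(256)                 # assumed "buildingness" scores
out, idx = sparse_token_attention(tokens, scores, k=16)
```

With N = 256 and k = 16, the attention matrix shrinks from 65,536 entries to 256, which is the order-of-magnitude saving the abstract refers to.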

11 pages, 17433 KiB  
Communication
Acquisition of the Wide Swath Significant Wave Height from HY-2C through Deep Learning
by Jichao Wang, Ting Yu, Fangyu Deng, Zongli Ruan and Yongjun Jia
Remote Sens. 2021, 13(21), 4425; https://doi.org/10.3390/rs13214425 - 3 Nov 2021
Cited by 2 | Viewed by 1811
Abstract
Significant wave height (SWH) is of great importance in industries such as ocean engineering, marine resource development, shipping, and transportation. Haiyang-2C (HY-2C), the second operational satellite in China's ocean dynamics exploration series, can provide all-weather, all-day, global observations of wave height, wind, and temperature. An altimeter can only measure the wave height at nadir, whereas a scatterometer can obtain the wind field over a wide swath. In this paper, a deep learning approach is applied to produce wide-swath SWH data from the wind field measured by the scatterometer and the nadir wave height taken from the altimeter. Two test sets, one month of data at 6 min intervals and one day of data at 10 s intervals, are fed into the trained model. Experiments indicate that extending the nadir SWH in this way yields a real-time, along-track, wide-swath gridded product that can support oceanographic study, and that taking the swell characteristics of ERA5 into account as an input to the wide-swath SWH model is beneficial. In conclusion, the results demonstrate the effectiveness and feasibility of the wide-swath SWH model. Full article
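The underlying recipe, fit a model where the altimeter provides SWH truth at nadir, then apply it across the swath where only scatterometer winds (and reanalysis swell) are available, can be shown with a deliberately simplified stand-in. The sketch below replaces the paper's deep network with ordinary least squares on synthetic data; the variable names, coefficients, and train/test split are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for swath-grid measurements.
wind  = rng.uniform(2, 20, size=500)   # scatterometer wind speed (m/s)
swell = rng.uniform(0, 3, size=500)    # ERA5 swell height (m), assumed input
# Synthetic "truth" SWH with a known linear dependence plus noise.
swh = 0.25 * wind + 0.8 * swell + 0.5 + 0.1 * rng.standard_normal(500)

# Fit on "nadir" samples, where the altimeter supplies the SWH target ...
X = np.column_stack([wind, swell, np.ones_like(wind)])
coef, *_ = np.linalg.lstsq(X[:400], swh[:400], rcond=None)

# ... then predict across the wide swath (held-out samples here).
pred = X[400:] @ coef
rmse = np.sqrt(np.mean((pred - swh[400:]) ** 2))
```

The deep model in the paper plays the role of the least-squares fit here, capturing a far richer nonlinear mapping from the wind field to SWH.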

23 pages, 7334 KiB  
Article
Temporally Generalizable Land Cover Classification: A Recurrent Convolutional Neural Network Unveils Major Coastal Change through Time
by Patrick Clifton Gray, Diego F. Chamorro, Justin T. Ridge, Hannah Rae Kerner, Emily A. Ury and David W. Johnston
Remote Sens. 2021, 13(19), 3953; https://doi.org/10.3390/rs13193953 - 2 Oct 2021
Cited by 10 | Viewed by 2970
Abstract
The ability to accurately classify land cover in periods before appropriate training and validation data exist is a critical step towards understanding subtle long-term impacts of climate change. These trends cannot be properly understood and distinguished from individual disturbance events or decadal cycles using only a decade or less of data. Understanding these long-term changes in low-lying coastal areas, home to a huge proportion of the global population, is of particular importance. Relatively simple deep learning models that extract representative spatiotemporal patterns can lead to major improvements in temporal generalizability. To provide insight into major changes in low-lying coastal areas, our study (1) developed a recurrent convolutional neural network that incorporates spectral, spatial, and temporal contexts for predicting land cover class, (2) evaluated this model across time and space and compared it to conventional Random Forest and Support Vector Machine methods as well as other deep learning approaches, and (3) applied this model to classify land cover across 20 years of Landsat 5 data in the low-lying coastal plain of North Carolina, USA. We observed striking changes related to sea level rise that support smaller-scale evidence of agricultural land and forests transitioning into wetlands and "ghost forests". This work demonstrates that recurrent convolutional neural networks should be considered when a model is needed that can generalize across time, and that they can help uncover important trends necessary for understanding and responding to climate change in vulnerable coastal regions. Full article
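The recurrent half of a recurrent convolutional network can be isolated in a few lines: each pixel's multi-date spectral series is folded into a single hidden state that a downstream classifier consumes. The NumPy sketch below shows only that per-pixel recurrence with untrained random weights; the convolutional spatial context of the paper's model is omitted, and all shapes are illustrative.

```python
import numpy as np

def rnn_sequence_features(series, w_in, w_rec):
    """Forward pass of a minimal recurrent cell over one pixel's time
    series of spectral measurements; the final hidden state summarizes
    the pixel's temporal behaviour for a downstream classifier.

    series : (T, B) array, T acquisition dates of B spectral bands
    """
    h = np.zeros(w_rec.shape[0])
    for x_t in series:                       # iterate over acquisition dates
        h = np.tanh(w_in @ x_t + w_rec @ h)  # recurrent state update
    return h

rng = np.random.default_rng(0)
T, B, H = 12, 6, 8                  # monthly series, 6 bands, 8 hidden units
w_in = rng.standard_normal((H, B)) * 0.3
w_rec = rng.standard_normal((H, H)) * 0.3
pixel_series = rng.standard_normal((T, B))
feat = rnn_sequence_features(pixel_series, w_in, w_rec)
```

Because the state depends on the order of acquisitions, the same set of observations presented in a different seasonal order yields a different summary, which is precisely the temporal context a per-date classifier cannot exploit.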

23 pages, 3661 KiB  
Article
Novel Intelligent Spatiotemporal Grid Earthquake Early-Warning Model
by Daoye Zhu, Yi Yang, Fuhu Ren, Shunji Murai, Chengqi Cheng and Min Huang
Remote Sens. 2021, 13(17), 3426; https://doi.org/10.3390/rs13173426 - 29 Aug 2021
Cited by 2 | Viewed by 1793
Abstract
The integrated analysis of multiple types of geospatial information poses challenges to existing spatiotemporal data organization models and to deep-learning-based analysis models. For earthquake early warning, this study proposes a novel intelligent spatiotemporal grid model based on GeoSOT (SGMG-EEW) for feature fusion of multi-type geospatial data. The model includes a seismic grid sample model (SGSM) and a spatiotemporal grid model based on a three-dimensional group convolutional neural network (3DGCNN-SGM). The SGSM solves the problem that layers of different data types cannot form an ensemble with a consistent data structure, and transforms the grid representation of the data into grid samples for deep learning. The 3DGCNN-SGM is the first application of group convolution to deep learning on multi-source geographic information data. It avoids direct superposition of data across different layers, which can negatively affect the results of a deep learning analysis model. Taking atmospheric temperature anomalies and historical earthquake precursory data from Japan as an example, an earthquake early-warning verification experiment was conducted based on the proposed SGMG-EEW. Five control experiments were designed: atmospheric temperature anomaly data only, historical earthquake data only, a non-group-convolution control group, a support vector machine control group, and a seismic statistical analysis control group. The results show that the proposed SGSM is not only compatible with a single type of spatiotemporal data but can also support multiple types, forming a deep-learning-oriented data structure. Compared with a traditional deep learning model, the proposed 3DGCNN-SGM is more suitable for the integrated analysis of multiple types of spatiotemporal data. Full article
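The property group convolution buys here, that channels from different data sources are mixed only within their own group and never superposed across sources, is easy to demonstrate. The NumPy sketch below reduces the paper's 3D group convolution to a per-group 1x1 channel mixing, which keeps the grouping behaviour while omitting the spatial kernel; layer names and shapes are invented for illustration.

```python
import numpy as np

def grouped_channel_mix(x, weights):
    """1x1 group convolution: each channel group (data source) is mixed
    only within itself, never across groups.

    x       : (C, H, W) stacked grid layers from several sources
    weights : list of (Cg, Cg) matrices, one per group; the Cg sum to C
    """
    outs, start = [], 0
    for w in weights:
        cg = w.shape[0]
        grp = x[start:start + cg]                        # (Cg, H, W)
        outs.append(np.tensordot(w, grp, axes=([1], [0])))
        start += cg
    return np.concatenate(outs, axis=0)

rng = np.random.default_rng(0)
temp = rng.standard_normal((4, 8, 8))   # atmospheric-temperature layers
quake = rng.standard_normal((4, 8, 8))  # historical-earthquake layers
x = np.concatenate([temp, quake], axis=0)
w = [np.eye(4), np.zeros((4, 4))]       # identity vs. zero groups for demo
y = grouped_channel_mix(x, w)
```

With the identity/zero weights chosen for the demo, the temperature group passes through untouched while the earthquake group is zeroed, confirming that no information leaks between the two sources.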
