Semantic Segmentation of High-Resolution Images with Deep Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2021) | Viewed by 32684

Special Issue Editors


Guest Editor
ICT Convergence Research Center, Kumoh National Institute of Technology, Gumi 39177, Korea
Interests: radio signal processing in 5G networks; signal identification; waveform and modulation recognition; channel estimation in wireless communications; machine learning and deep learning for visual applications and communications

Special Issue Information

Dear Colleagues,

Over the past decade, image segmentation has been an open research topic in image processing and computer vision, and it has attracted particular attention in the era of deep learning (DL). Numerous high-impact DL models based on convolutional neural networks (CNNs) and fully convolutional networks (FCNs) have been introduced for semantic segmentation, achieving remarkable performance in wide-ranging applications, from scene understanding for autonomous driving to skin lesion segmentation for medical diagnosis and hyperspectral image segmentation for remote sensing.

Thanks to innovative imaging and aerial photography technology, large numbers of aerial hyperspectral and multispectral images can be acquired conveniently and quickly, which is useful for remote sensing applications such as forest-cover measurement, land-use investigation, and urban planning. Encouraged by the success of DL-based semantic segmentation for natural images, several segmentation models have taken advantage of CNNs and FCNs for pixel-wise classification of remote sensing images (RSIs), including multispectral and hyperspectral images; nevertheless, these models face many challenging issues in high-resolution imagery analysis.

Unlike natural images, high-resolution RSIs contain numerous object categories along with redundant object details; therefore, in addition to taking into account the specific characteristics of RSIs (e.g., more channels and higher intensity values), a semantic segmentation method has to handle inter-class distinction and intra-class consistency effectively. Additionally, feeding a full high-resolution image as input to a DL model is nearly impossible, because the computational complexity of the segmentation system would increase excessively. Some current approaches sacrifice some segmentation accuracy to boost the processing speed of the system through spatially based image decomposition. For this Special Issue, we solicit original contributions from pioneering researchers on high-performance semantic segmentation of high-resolution RSIs that exploits deep learning to address the aforementioned theoretical problems.

Dr. Thien Huynh-The
Dr. Sun Le
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image segmentation
  • Pixel-wise classification
  • Scene/object segmentation
  • Region of interest
  • Deep learning
  • High-resolution/super-pixel remote sensing image segmentation
  • Hyperspectral/multispectral/aerial image analysis

Published Papers (11 papers)


Research

Jump to: Other

17 pages, 3820 KiB  
Article
Convolutional Neural Network for Pansharpening with Spatial Structure Enhancement Operator
by Weiwei Huang, Yan Zhang, Jianwei Zhang and Yuhui Zheng
Remote Sens. 2021, 13(20), 4062; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13204062 - 11 Oct 2021
Cited by 1 | Viewed by 1654
Abstract
Pansharpening aims to fuse the abundant spectral information of multispectral (MS) images with the spatial details of panchromatic (PAN) images, yielding a high-spatial-resolution MS (HRMS) image. Traditional methods focus only on linear models, ignoring the fact that the degradation process is a nonlinear inverse problem. Because convolutional neural networks (CNNs) are remarkably effective at overcoming the shortcomings of traditional linear models, they have been adapted for pansharpening in the past few years. However, most existing CNN-based methods cannot take full advantage of the structural information of images. To address this problem, this study proposes a pansharpening method that combines a spatial structure enhancement operator with a CNN architecture. The method uses the Sobel operator as an edge detector to extract abundant high-frequency information from the input PAN and MS images, thereby obtaining rich spatial features. Moreover, a CNN acquires the spatial feature maps, preserving information in both the spatial and spectral domains. Simulated and real-data experiments demonstrate that the method performs excellently in both quantitative and visual evaluation.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
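As a concrete illustration of the edge-detection step described in the abstract above, the following minimal NumPy sketch applies the 3×3 Sobel kernels to a single-band image to obtain a high-frequency (edge-strength) map. The function name and structure are illustrative only, not taken from the paper:

```python
import numpy as np

def sobel_high_frequency(img):
    """Extract an edge-strength map with 3x3 Sobel kernels (same-size output)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")  # replicate borders to keep shape
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    return np.hypot(gx, gy)                 # gradient magnitude = edge strength
```

In a pipeline of this kind, maps like this, computed from the PAN image and the upsampled MS bands, would be fed to the CNN alongside the original images.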

19 pages, 5225 KiB  
Article
A Cyclic Information–Interaction Model for Remote Sensing Image Segmentation
by Xu Cheng, Lihua Liu and Chen Song
Remote Sens. 2021, 13(19), 3871; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13193871 - 27 Sep 2021
Cited by 3 | Viewed by 2017
Abstract
Object detection and segmentation have recently shown encouraging results in image analysis and interpretation owing to their promising applications in remote sensing image fusion. Although numerous methods have been proposed, implementing effective and efficient object detection remains very challenging, especially given the limitations of single-modal data, which do not always provide adequate spectral and spatial resolution. The rapid expansion in the number and availability of multi-source data poses new challenges for effective and efficient processing. In this paper, we propose an effective feature information–interaction visual attention model for multimodal data segmentation and enhancement, which uses channel information to weight the self-attentive feature maps of different sources, completing the extraction, fusion, and enhancement of global semantic features together with the local contextual information of the object. Additionally, we propose an adaptively cyclic feature information–interaction model, which adopts branch prediction to decide the number of visual perceptions, accomplishing adaptive fusion of global semantic features and local fine-grained information. Numerous experiments on several benchmarks show that the proposed approach achieves significant improvements over the baseline model.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)

19 pages, 4067 KiB  
Article
Patch-Wise Semantic Segmentation for Hyperspectral Images via a Cubic Capsule Network with EMAP Features
by Le Sun, Xiangbo Song, Huxiang Guo, Guangrui Zhao and Jinwei Wang
Remote Sens. 2021, 13(17), 3497; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173497 - 03 Sep 2021
Cited by 8 | Viewed by 1908
Abstract
The convolutional neural networks (CNNs) used in current hyperspectral image (HSI) classification/segmentation methods have several disadvantages: they cannot recognize the rotation of spatial objects, they have difficulty capturing fine spatial features, and the principal component analysis (PCA) they rely on ignores important information when it retains few components. To overcome these problems, this paper proposes an HSI segmentation model based on extended multi-morphological attribute profile (EMAP) features and a cubic capsule network (EMAP–Cubic-Caps). EMAP features effectively extract the various attribute profile features of entities in an HSI, and the cubic capsule network effectively captures complex spatial features in more detail. Firstly, the EMAP algorithm extracts the morphological attribute profile features of the principal components obtained by PCA, and the EMAP feature map is used as the network input. Then, the low-level spectral and spatial information of the HSI is extracted by a cubic convolutional network, and the high-level information is extracted by the capsule module, which consists of an initial capsule layer and a digit capsule layer. Experimental comparisons on three well-known HSI datasets validate the superiority of the proposed algorithm in semantic segmentation.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
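Capsule layers of the kind described above rely on the standard "squash" nonlinearity from Sabour et al.'s capsule networks, which keeps a capsule vector's direction but maps its length into [0, 1) so the length can be read as an existence probability. The following NumPy sketch shows the generic function; it is not code from the paper:

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule squash: scale vector v so its norm lies in [0, 1),
    preserving its direction; short vectors shrink toward zero."""
    n2 = float(np.sum(v ** 2))                  # squared norm
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)
```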

18 pages, 2787 KiB  
Article
Semantic Segmentation of Large-Scale Outdoor Point Clouds by Encoder–Decoder Shared MLPs with Multiple Losses
by Beanbonyka Rim, Ahyoung Lee and Min Hong
Remote Sens. 2021, 13(16), 3121; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163121 - 06 Aug 2021
Cited by 11 | Viewed by 2595
Abstract
Semantic segmentation of large-scale outdoor 3D LiDAR point clouds is essential for understanding scene environments in applications such as geometric mapping, autonomous driving, and more. Although 3D LiDAR point clouds have the advantage of being a 3D metric space, they pose a challenge for deep learning approaches because of their unstructured, unordered, irregular, and large-scale characteristics. This paper therefore presents an encoder–decoder shared multi-layer perceptron (MLP) with multiple losses to address this semantic segmentation task. The central challenge is a trade-off between efficiency and effectiveness. To balance this trade-off, we propose simple yet effective mechanisms: a random point sampling layer, an attention-based pooling layer, and a summation of multiple losses, integrated with the encoder–decoder shared MLP method for large-scale outdoor point-cloud semantic segmentation. We conducted experiments on two large-scale benchmark datasets, Toronto-3D and DALES. Our method achieved an overall accuracy (OA) and mean intersection over union (mIoU) of 83.60% and 71.03% on Toronto-3D, and 76.43% and 59.52% on DALES, respectively. Additionally, the proposed model has few parameters and runs about three times faster than PointNet++ at inference.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
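The attention-based pooling layer described above aggregates the features of many points into one vector via a softmax-weighted sum. In the sketch below, the attention scores come from a trivial linear proxy rather than the shared MLP such a network would train, so treat it purely as an illustration of the aggregation step:

```python
import numpy as np

def attention_pooling(features):
    """Aggregate N point features (shape (N, D)) into a single D-vector.

    Softmax over per-point scores gives attention weights; the pooled
    feature is the weighted sum (a convex combination of the points).
    The score function is a stand-in for a learned MLP scoring head.
    """
    scores = features.sum(axis=1)               # stand-in scoring head
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ features                   # weighted sum over points
```

Because the weights sum to one, the pooled vector always lies inside the per-dimension range of the input features, unlike max pooling, which keeps only one point's value per dimension.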

18 pages, 3910 KiB  
Article
Hybridizing Cross-Level Contextual and Attentive Representations for Remote Sensing Imagery Semantic Segmentation
by Xin Li, Feng Xu, Runliang Xia, Xin Lyu, Hongmin Gao and Yao Tong
Remote Sens. 2021, 13(15), 2986; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152986 - 29 Jul 2021
Cited by 13 | Viewed by 1850
Abstract
Semantic segmentation of remote sensing imagery is a fundamental task in intelligent interpretation. Since deep convolutional neural networks (DCNNs) have shown considerable ability to learn implicit representations from data, numerous works in recent years have transferred DCNN-based models to remote sensing data analysis. However, wide observation areas, complex and diverse objects, and varying illumination and imaging angles make pixels easily confused, leading to undesirable results. Therefore, a remote sensing imagery semantic segmentation network, named HCANet, is proposed to generate representative and discriminative representations for dense prediction. HCANet hybridizes cross-level contextual and attentive representations to emphasize the distinguishability of learned features. First, a cross-level contextual representation module (CCRM) is devised to exploit and harness superpixel contextual information. Moreover, a hybrid representation enhancement module (HREM) is designed to flexibly fuse cross-level contextual and self-attentive representations. Furthermore, the decoder incorporates a DUpsampling operation to boost efficiency losslessly. Extensive experiments were conducted on the Vaihingen and Potsdam benchmarks. The results indicate that HCANet achieves excellent overall accuracy and mean intersection over union, and the ablation study further verifies the contribution of CCRM.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)

20 pages, 2829 KiB  
Article
Spatial-Spectral Network for Hyperspectral Image Classification: A 3-D CNN and Bi-LSTM Framework
by Junru Yin, Changsheng Qi, Qiqiang Chen and Jiantao Qu
Remote Sens. 2021, 13(12), 2353; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122353 - 16 Jun 2021
Cited by 19 | Viewed by 3113
Abstract
Recently, deep learning methods based on the combination of spatial and spectral features have been successfully applied in hyperspectral image (HSI) classification. To improve the utilization of the spatial and spectral information in an HSI, this paper proposes a unified network framework using a three-dimensional convolutional neural network (3-D CNN) and a band-grouping-based bidirectional long short-term memory (Bi-LSTM) network for HSI classification. In the framework, extracting spectral features is treated as processing sequence data, and the Bi-LSTM network acts as the spectral feature extractor of the unified network to fully exploit the close relationships between spectral bands. The 3-D CNN has a unique advantage in processing 3-D data; therefore, it is used as the spatial-spectral feature extractor in this unified network. Finally, in order to optimize the parameters of both feature extractors simultaneously, the Bi-LSTM and 3-D CNN share a loss function to form a unified network. To evaluate the performance of the proposed framework, three datasets were tested for HSI classification. The results demonstrate that the proposed method outperforms current state-of-the-art HSI classification methods.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
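The band-grouping step that turns a pixel's spectrum into a sequence for a Bi-LSTM can be illustrated as follows. The exact grouping scheme here (fixed-size groups, zero-padding the last group) is an assumption for the sketch, not necessarily the paper's scheme:

```python
import numpy as np

def group_bands(spectrum, group_size):
    """Split a 1-D spectral vector into a (num_groups, group_size) sequence.

    Each row becomes one time step for a recurrent network; the last group
    is zero-padded when the band count is not a multiple of group_size.
    """
    n = len(spectrum)
    pad = (-n) % group_size                      # bands missing from last group
    padded = np.pad(np.asarray(spectrum, dtype=float), (0, pad))
    return padded.reshape(-1, group_size)
```

For example, a 200-band spectrum with `group_size=10` becomes a 20-step sequence of 10-dimensional inputs, letting the Bi-LSTM model correlations between neighboring band groups in both directions.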

18 pages, 7558 KiB  
Article
Ghost Elimination via Multi-Component Collaboration for Unmanned Aerial Vehicle Remote Sensing Image Stitching
by Wanli Xue, Zhe Zhang and Shengyong Chen
Remote Sens. 2021, 13(7), 1388; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13071388 - 04 Apr 2021
Cited by 14 | Viewed by 2815
Abstract
Ghosts are a common phenomenon in unmanned aerial vehicle (UAV) remote sensing image stitching that seriously affects the naturalness of stitching results. To effectively remove ghosts and produce visually natural results, we propose a novel image stitching method that identifies and eliminates ghosts through multi-component collaboration, without object distortion, segmentation, or repetition. Our main contributions are as follows. First, we propose a ghost identification component that locates potential ghosts in the stitching area and detects significantly moving objects in the two stitched images. In particular, because of the characteristics of UAV imaging, the objects in UAV remote sensing images are small and image quality is poor; we therefore propose a mesh-based image difference comparison method to identify ghosts, and use an object tracking algorithm to accurately match each ghost pair. Second, we design an image information source selection strategy to generate the ghost replacement region, which can replace a located ghost while avoiding object distortion, segmentation, and repetition. Third, we find that eliminating the ghost introduced by the initial blending with the selected image information source produces natural mosaic images. We validate the proposed method on the VIVID dataset and compare it with Homo, ELA, SPW, and APAP using the peak signal-to-noise ratio (PSNR) metric.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)

18 pages, 7711 KiB  
Article
SSCNN-S: A Spectral-Spatial Convolution Neural Network with Siamese Architecture for Change Detection
by Tianming Zhan, Bo Song, Yang Xu, Minghua Wan, Xin Wang, Guowei Yang and Zebin Wu
Remote Sens. 2021, 13(5), 895; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13050895 - 27 Feb 2021
Cited by 30 | Viewed by 2979
Abstract
In this paper, a spectral-spatial convolutional neural network with a Siamese architecture (SSCNN-S) for hyperspectral image (HSI) change detection (CD) is proposed. First, tensors are extracted separately from two HSIs recorded at different time points, and tensor pairs are constructed. The tensor pairs are then fed into the spectral-spatial network to obtain two spectral-spatial vectors. Thereafter, the Euclidean distance between the two spectral-spatial vectors is calculated to represent the similarity of the tensor pair. We use a Siamese network based on a contrastive loss to train and optimize the network so that the Euclidean distance it outputs describes the similarity of tensor pairs as accurately as possible. Finally, the values obtained by feeding all tensor pairs into the trained model are used to judge whether a pixel belongs to the changed area. SSCNN-S transforms the problem of HSI CD into one of similarity measurement for tensor pairs by introducing the Siamese network. The network used to extract tensor features in SSCNN-S combines spectral and spatial information to reduce the impact of noise on CD. Additionally, a useful four-test scoring method is proposed to improve experimental efficiency instead of taking the mean of multiple measurements. Experiments on real datasets demonstrate the validity of the SSCNN-S method.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
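The contrastive loss driving the Siamese training above has a standard form (Hadsell et al.): similar pairs are pulled together, dissimilar pairs are pushed apart up to a margin. This sketch uses the convention that `same = 1` marks a similar (unchanged) pair; the paper's own label convention may differ:

```python
import numpy as np

def contrastive_loss(dist, same, margin=1.0):
    """Contrastive loss on a pair's Euclidean distance `dist`.

    same = 1: similar pair, pulled together (loss = dist^2).
    same = 0: dissimilar pair, pushed apart until `margin` (hinged term).
    """
    return same * dist ** 2 + (1 - same) * np.maximum(margin - dist, 0.0) ** 2
```

At inference, the learned distance itself can then be thresholded to decide whether a pixel pair belongs to the changed area, as the abstract describes.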

22 pages, 7361 KiB  
Article
CF2PN: A Cross-Scale Feature Fusion Pyramid Network Based Remote Sensing Target Detection
by Wei Huang, Guanyi Li, Qiqiang Chen, Ming Ju and Jiantao Qu
Remote Sens. 2021, 13(5), 847; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13050847 - 25 Feb 2021
Cited by 49 | Viewed by 3252
Abstract
In the wake of developments in remote sensing, target detection in remote sensing imagery is of increasing interest. Unfortunately, unlike natural image processing, remote sensing image processing involves large variations in object size, which poses a great challenge to researchers. Although traditional multi-scale detection networks have been successful in handling such variations, they still have certain limitations: (1) Traditional multi-scale detection methods note the scale of features but ignore the correlation between feature levels. Each feature map is represented by a single layer of the backbone network, and the extracted features are not comprehensive enough. For example, the SSD network uses the features extracted from the backbone network at different scales directly for detection, resulting in the loss of a large amount of contextual information. (2) These methods pair inherent backbone classification networks with detection tasks; RetinaNet, for instance, is simply a combination of the ResNet-101 classification network and an FPN, yet object classification and detection are different tasks. To address these issues, a cross-scale feature fusion pyramid network (CF2PN) is proposed. First and foremost, a cross-scale fusion module (CSFM) is introduced to extract sufficiently comprehensive semantic information from features for multi-scale fusion. Moreover, a feature pyramid for target detection utilizing thinning U-shaped modules (TUMs) performs multi-level fusion of the features. Eventually, a focal loss in the prediction section is used to control the large number of negative samples generated during the feature fusion process. The proposed architecture is verified on the DIOR and RSOD datasets. The experimental results show that the performance of this method is improved by 2–12% on the DIOR and RSOD datasets compared with current SOTA target detection methods.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
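The focal loss mentioned above for taming the flood of easy negatives is the standard formulation from Lin et al.'s RetinaNet work. A minimal binary-case NumPy sketch follows; the defaults γ = 2 and α = 0.25 are the commonly used values, not necessarily those of CF2PN:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for predicted probability p and label y in {0, 1}.

    The (1 - p_t)^gamma factor shrinks the loss of well-classified (easy)
    examples, so the abundant easy negatives do not dominate training.
    """
    p_t = np.where(y == 1, p, 1.0 - p)              # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 and α = 0.5 this reduces to (half of) the ordinary cross-entropy, which makes the role of the modulating factor easy to check.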

25 pages, 10173 KiB  
Article
Downscaling Snow Depth Mapping by Fusion of Microwave and Optical Remote-Sensing Data Based on Deep Learning
by Linglong Zhu, Yonghong Zhang, Jiangeng Wang, Wei Tian, Qi Liu, Guangyi Ma, Xi Kan and Ya Chu
Remote Sens. 2021, 13(4), 584; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13040584 - 07 Feb 2021
Cited by 26 | Viewed by 5231
Abstract
Accurate high-spatial-resolution snow depth mapping in arid and semi-arid regions is of great importance for snow disaster assessment and hydrological modeling. However, because of complex topography and the low spatial resolution of microwave remote-sensing data, existing snow depth datasets have large errors and uncertainty, and the actual spatiotemporal heterogeneity of snow depth cannot be effectively detected. This paper proposes a deep learning approach to downscaled snow depth retrieval based on the fusion of satellite remote-sensing data with multiple spatial scales and diverse characteristics. The Fengyun-3 Microwave Radiation Imager (FY-3 MWRI) data were downscaled to 500 m resolution to match Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover, meteorological, and geographic data. A deep neural network was constructed to capture detailed spectral and radiation signals and trained to retrieve higher-spatial-resolution snow depth from the aforementioned inputs and ground observations. Verified against in situ measurements, the downscaled snow depth has the lowest root mean square error (RMSE) and mean absolute error (MAE) in the study area (8.16 cm and 4.73 cm, respectively), compared with the Environmental and Ecological Science Data Center for West China Snow Depth (WESTDC_SD, 9.38 cm and 5.36 cm), the Microwave Radiation Imager (MWRI) Ascend Snow Depth (MWRI_A_SD, 9.45 cm and 5.49 cm), and the MWRI Descend Snow Depth (MWRI_D_SD, 10.55 cm and 6.13 cm). Meanwhile, the downscaled snow depth provides more detailed spatial distribution information, which has been used to analyze how various topographic factors reduce retrieval accuracy.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)

Other

Jump to: Research

16 pages, 6112 KiB  
Technical Note
Multiscale Weighted Adjacent Superpixel-Based Composite Kernel for Hyperspectral Image Classification
by Yaokang Zhang and Yunjie Chen
Remote Sens. 2021, 13(4), 820; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13040820 - 23 Feb 2021
Cited by 5 | Viewed by 2061
Abstract
This paper presents a composite kernel method (MWASCK) based on multiscale weighted adjacent superpixels (ASs) to classify hyperspectral images (HSIs). MWASCK adequately exploits the spatial-spectral features of weighted adjacent superpixels to guarantee that more accurate spectral features can be extracted. Firstly, we use a superpixel segmentation algorithm to divide the HSI into multiple superpixels. Secondly, the similarities between each target superpixel and its ASs are calculated to construct the spatial features. Finally, a weighted AS-based composite kernel (WASCK) method for HSI classification is proposed. To avoid searching for the optimal superpixel scale and to fuse multiscale spatial features, the MWASCK method uses multiscale weighted superpixel neighbor information. Experiments on two real HSIs indicate the superior performance of the WASCK and MWASCK methods compared with several popular classification methods.
(This article belongs to the Special Issue Semantic Segmentation of High-Resolution Images with Deep Learning)
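The composite-kernel idea above, combining a spectral kernel on raw pixel spectra with a spatial kernel on superpixel-averaged features, can be sketched for a single pair of samples as below. The RBF kernel choice and the weight μ and bandwidth γ are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return float(np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def composite_kernel(x_spec, y_spec, x_spat, y_spat, mu=0.5, gamma=1.0):
    """Weighted sum of a spatial kernel (on superpixel-averaged features)
    and a spectral kernel (on the raw pixel spectra). mu trades off the two."""
    return mu * rbf(x_spat, y_spat, gamma) + (1.0 - mu) * rbf(x_spec, y_spec, gamma)
```

A weighted sum of valid kernels is itself a valid kernel, so matrices built this way can be plugged directly into a kernel classifier such as an SVM.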
