GPU Computing for Geoscience and Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 62441

Special Issue Editors


Guest Editor
Space Science and Engineering Center, University of Wisconsin-Madison, 1225 W. Dayton St, Madison, WI 53706, USA
Interests: satellite data compression; high-performance computing in remote sensing; remote sensing image processing; remote sensing forward modeling and inverse problems

Guest Editor
Department of Electrical Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
Interests: remote sensing; high performance computing; deep learning; pattern recognition; image processing

Special Issue Information

Dear Colleagues,

Technological advances in modern active and passive sensors with higher spectral, spatial, and/or temporal resolutions have resulted in a substantial increase in multidimensional data volume. This increase poses a challenge to processing remote sensing big data in a timely fashion for environmental, commercial, or military applications. Parallel, distributed, and grid computing facilities and algorithms have become indispensable tools for tackling the processing of massive remote sensing data. In recent years, the graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth, offering two to three orders of magnitude of speedup over the CPU. A cost-effective GPU-based computer has become an appealing alternative to an expensive CPU-based computer cluster for many researchers in a variety of scientific and engineering applications. This Special Issue of Remote Sensing aims to present state-of-the-art research on incorporating high-performance computing facilities and algorithms into effective and efficient remote sensing applications. Papers are solicited in, but not limited to, the following areas:

  • Multispectral, hyperspectral, or ultraspectral remote sensing data processing.
  • Microwave, visible, or ultraviolet remote sensing data processing.
  • Synthetic Aperture Radar (SAR) and LiDAR remote sensing data processing.
  • Remote sensing image and video coding/decoding and error correction.
  • Spaceborne, airborne, or ground-based sensor design and simulation.
  • Geophysical parameter retrieval from remote sensing data.
  • Remote sensing data assimilation and modeling for weather forecasting.
  • Passive and active remote sensing data processing, including image registration, color correction, noise reduction, image tracking, target detection, spectral unmixing, feature extraction, image segmentation, image recognition, data fusion, super-resolution, and anomaly detection.
  • Recent development trends in remote sensing technology, including new challenges in discovering high-performance computing solutions for machine learning, artificial intelligence, deep learning, big data, and large-scale computing.

Dr. Bormin Huang
Prof. Yang-Lang Chang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • GPU computing
  • High-performance computing
  • Parallel, distributed, and grid computing
  • Big data and large-scale computing

Published Papers (12 papers)


Research

18 pages, 4733 KiB  
Article
Radar High-Resolution Range Profile Ship Recognition Using Two-Channel Convolutional Neural Networks Concatenated with Bidirectional Long Short-Term Memory
by Chih-Lung Lin, Tsung-Pin Chen, Kuo-Chin Fan, Hsu-Yung Cheng and Chi-Hung Chuang
Remote Sens. 2021, 13(7), 1259; https://doi.org/10.3390/rs13071259 - 26 Mar 2021
Cited by 10 | Viewed by 3073
Abstract
Radar automatic target recognition is a critical research topic in radar signal processing. Radar high-resolution range profiles (HRRPs) describe the radar characteristics of a target: the characteristics of the target reflected by the radar's microwave emissions are implicit in them. Conventional radar HRRP target recognition methods require prior knowledge of the radar. Deep-learning methods have only recently been applied to HRRPs, and most use convolutional neural networks (CNNs) and their variants; recurrent neural networks (RNNs) and RNN-CNN combinations remain relatively rare. The continuous pulses emitted by the radar strike the ship target, and the received HRRPs of the reflected wave convey the geometric characteristics of the ship's structure. Because different positions on the ship have different structures, each range cell of the echo in the HRRP differs, and adjacent structures should exhibit continuous relational characteristics. This inspired the authors to propose a model that concatenates the features extracted by a two-channel CNN with a bidirectional long short-term memory (BiLSTM) network. Various filters in the two-channel CNN extract deep features, which are fed into the BiLSTM. The BiLSTM effectively captures long-distance dependence, because it can be trained to retain critical information and model two-way temporal dependence. The two-way spatial relationship between adjacent range cells can therefore be exploited to obtain excellent recognition performance. The experimental results show that the proposed method is robust and effective for ship recognition.
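To make the described architecture concrete, here is a minimal PyTorch sketch of the general idea: two parallel 1-D CNN branches with different kernel widths extract features from an HRRP, and their concatenated outputs feed a bidirectional LSTM. All layer sizes and kernel widths are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TwoChannelCNNBiLSTM(nn.Module):
    """Two 1-D CNN branches -> concatenation -> BiLSTM -> classifier."""
    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        # Two CNN channels with different receptive fields (assumed sizes).
        self.branch_a = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU())
        # The BiLSTM captures two-way dependence between adjacent range cells.
        self.bilstm = nn.LSTM(input_size=32, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, 1, range_cells)
        f = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        f = f.permute(0, 2, 1)                 # (batch, range_cells, features)
        out, _ = self.bilstm(f)
        return self.fc(out[:, -1, :])          # classify from the last step

model = TwoChannelCNNBiLSTM(n_classes=3)
logits = model(torch.randn(8, 1, 128))         # 8 profiles, 128 range cells
```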

13 pages, 999 KiB  
Communication
Hyperspectral Parallel Image Compression on Edge GPUs
by Oscar Ferraz, Vitor Silva and Gabriel Falcao
Remote Sens. 2021, 13(6), 1077; https://doi.org/10.3390/rs13061077 - 12 Mar 2021
Cited by 5 | Viewed by 2260
Abstract
Edge applications have evolved into a variety of scenarios that include the acquisition and compression of immense numbers of images in remote environments such as satellites and drones, where power must be properly balanced against constrained memory and parallel computational resources. CCSDS-123 is a standard for the lossless compression of multispectral and hyperspectral images used on board satellites and military drones. This work explores the performance and power of three families of low-power heterogeneous Nvidia Jetson GPU architectures, namely the 128-core Nano, the 256-core TX2, and the 512-core Xavier AGX, by proposing a parallel implementation of the CCSDS-123 compressor on embedded systems that reduces development effort, compared to the production of dedicated circuits, while maintaining low power. The solution parallelizes the predictor on the low-power GPU while the entropy encoders exploit the heterogeneous CPU cores and the GPU concurrently. We report more than 4.4 GSamples/s for the predictor and up to 6.7 Gb/s for the complete system, requiring less than 11 W and achieving an efficiency of 611 Mb/s/W.
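The full CCSDS-123 predictor uses adaptive weights and local sums; the heavily simplified sketch below keeps only the prediction-plus-residual structure, to illustrate why this stage is embarrassingly parallel and maps well to GPUs. With CuPy installed, substituting cupy for numpy would run the same code on the device.

```python
import numpy as np

def predict_residuals(cube: np.ndarray) -> np.ndarray:
    """cube: (bands, rows, cols) integer samples -> same-shape residuals."""
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                  # first band stored verbatim
    residuals[1:] = cube[1:] - cube[:-1]    # inter-band differences; every
                                            # pixel is computed independently
    return residuals

def reconstruct(residuals: np.ndarray) -> np.ndarray:
    return np.cumsum(residuals, axis=0)     # exact lossless inverse

cube = np.random.randint(0, 4096, size=(32, 64, 64))    # mock 12-bit cube
assert np.array_equal(reconstruct(predict_residuals(cube)), cube)
```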

29 pages, 17449 KiB  
Article
Feature Line Embedding Based on Support Vector Machine for Hyperspectral Image Classification
by Ying-Nong Chen, Tipajin Thaipisutikul, Chin-Chuan Han, Tzu-Jui Liu and Kuo-Chin Fan
Remote Sens. 2021, 13(1), 130; https://doi.org/10.3390/rs13010130 - 01 Jan 2021
Cited by 23 | Viewed by 3380
Abstract
In this paper, a novel feature line embedding (FLE) algorithm based on the support vector machine (SVM), referred to as SVMFLE, is proposed for dimension reduction (DR) and for improving the performance of the generative adversarial network (GAN) in hyperspectral image (HSI) classification. The GAN has shown high discriminative capability in many applications. However, owing to its traditional linear principal component analysis (PCA) pre-processing step, the GAN cannot effectively capture nonlinear information; SVMFLE was proposed to overcome this problem. The proposed SVMFLE DR scheme is implemented in two stages. In the first, scatter-matrix calculation stage, the FLE within-class scatter matrix, the FLE between-class scatter matrix, and the support-vector-based FLE between-class scatter matrix are obtained. In the second, weight determination stage, the training sample dispersion indices versus the weight of the SVM-based FLE between-class matrix are calculated to determine the best weighting between the scatter matrices and obtain the final transformation matrix. Since the reduced feature space obtained by SVMFLE is much more representative and discriminative than that obtained using conventional schemes, the performance of the GAN in HSI classification is higher. The effectiveness of the proposed SVMFLE scheme with GAN or nearest neighbor (NN) classifiers was evaluated against state-of-the-art methods on three benchmark datasets. According to the experimental results, the proposed scheme outperformed the state-of-the-art schemes in three performance indices, with GAN accuracies of 96.3%, 89.2%, and 87.0% on the Salinas, Pavia University, and Indian Pines Site datasets, respectively. With the NN classifier, the scheme achieved accuracy rates of 89.8%, 86.0%, and 76.2% on the same datasets.
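The exact feature-line and SVM-based scatter matrices are not reproduced here; the sketch below substitutes ordinary within-/between-class scatter matrices to illustrate the two-stage pattern (scatter computation, then a weighted blend solved for a projection). The blending weight w and all data are illustrative assumptions.

```python
import numpy as np

def scatter_projection(X, y, w=0.5, dims=10):
    """Stage 1: scatter matrices; stage 2: weighted blend -> projection."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb1 = np.zeros((d, d)); Sb2 = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb1 += len(Xc) * (diff @ diff.T)           # between-class scatter
        # Placeholder for the SVM-based between-class term (assumption:
        # reuse the unweighted class-mean scatter in this sketch).
        Sb2 += diff @ diff.T
    Sb = w * Sb1 + (1.0 - w) * Sb2                 # stage 2: weighted blend
    # Transformation maximizing between- versus within-class scatter.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:dims]].real              # (d, dims) projection

X = np.random.randn(300, 50); y = np.random.randint(0, 3, 300)
Z = X @ scatter_projection(X, y, w=0.5, dims=10)   # reduced features
```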

23 pages, 17407 KiB  
Article
YOLOv3-Based Matching Approach for Roof Region Detection from Drone Images
by Chia-Cheng Yeh, Yang-Lang Chang, Mohammad Alkhaleefah, Pai-Hui Hsu, Weiyong Eng, Voon-Chet Koo, Bormin Huang and Lena Chang
Remote Sens. 2021, 13(1), 127; https://doi.org/10.3390/rs13010127 - 01 Jan 2021
Cited by 6 | Viewed by 4229
Abstract
Due to the large data volume, UAV image stitching and matching suffer from high computational cost. Traditional feature extraction algorithms, such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), require heavy computation to extract and describe features in high-resolution UAV images. To overcome this issue, You Only Look Once version 3 (YOLOv3) combined with traditional feature point matching algorithms is used to extract descriptive features from a drone dataset of residential areas for roof detection. Unlike the traditional feature extraction algorithms, YOLOv3 performs feature extraction only on the proposed candidate regions instead of the entire image, so the complexity of image matching is reduced significantly. All the extracted features are then fed into the Structural Similarity Index Measure (SSIM) to identify the corresponding roof region pair between consecutive images in a sequence. In addition, the candidate corresponding roof pair identified by the architecture serves as the coarse matching region pair and limits the search range of feature matching to the detected roof region only. This further improves feature matching consistency and reduces the chance of incorrect feature matches. Analytical results show that the proposed method is 13× faster than traditional image matching methods with comparable performance.
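As a rough illustration of the matching step, the sketch below compares every cross-frame pair of detected roof regions with SSIM and keeps the best-scoring pair; the detections themselves are mocked with random crops, and all sizes are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import resize

def best_roof_pair(regions_a, regions_b, size=(64, 64)):
    """Return (i, j, score) of the most SSIM-similar cross-frame pair."""
    best = (-1, -1, -1.0)
    for i, ra in enumerate(regions_a):
        ra_n = resize(ra, size)             # normalize shapes for SSIM
        for j, rb in enumerate(regions_b):
            score = structural_similarity(ra_n, resize(rb, size),
                                          data_range=1.0)
            if score > best[2]:
                best = (i, j, score)
    return best

frame_a = [np.random.rand(40, 50), np.random.rand(60, 60)]  # mock detections
frame_b = [np.random.rand(55, 45), np.random.rand(64, 64)]
print(best_roof_pair(frame_a, frame_b))
```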

29 pages, 1882 KiB  
Article
Accelerating a Geometrical Approximated PCA Algorithm Using AVX2 and CUDA
by Alina L. Machidon, Octavian M. Machidon, Cătălin B. Ciobanu and Petre L. Ogrutan
Remote Sens. 2020, 12(12), 1918; https://doi.org/10.3390/rs12121918 - 13 Jun 2020
Cited by 2 | Viewed by 3940
Abstract
Remote sensing data have grown explosively in the past decade. This has led to the need for efficient dimensionality reduction techniques: mathematical procedures that transform high-dimensional data into a meaningful, reduced representation. Projection Pursuit (PP)-based algorithms have been shown to be efficient solutions for performing dimensionality reduction on large datasets by searching for low-dimensional projections of the data in which meaningful structures are exposed. However, PP faces computational difficulties with very large datasets, which are common in hyperspectral imaging, raising the challenge of implementing such algorithms using the latest High Performance Computing approaches. In this paper, a PP-based geometrical approximated Principal Component Analysis algorithm (gaPCA) for hyperspectral image analysis is implemented and assessed on multi-core Central Processing Units (CPUs), Graphics Processing Units (GPUs), and multi-core CPUs using Single Instruction, Multiple Data (SIMD) AVX2 (Advanced Vector eXtensions) intrinsics, which provide significant improvements in performance and energy usage over the single-core implementation. The paper thus presents a cross-platform, cross-language perspective, with several implementations of the gaPCA algorithm in Matlab, Python, and C++, and GPU implementations based on the NVIDIA Compute Unified Device Architecture (CUDA). The proposed solutions are evaluated with respect to execution time and energy consumption. The experimental evaluation shows not only the advantage of using CUDA to implement the gaPCA algorithm on a GPU in terms of performance and energy consumption, but also significant benefits in implementing it on the multi-core CPU using AVX2 intrinsics.
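For reference, a compact NumPy formulation of the classical PCA that gaPCA approximates is given below; the geometric approximation itself is not reproduced here. Swapping numpy for cupy would give a GPU variant analogous in spirit, though not identical, to the paper's CUDA implementation.

```python
import numpy as np

def pca(X: np.ndarray, k: int) -> np.ndarray:
    """Project (pixels, bands) data onto its first k principal components."""
    Xc = X - X.mean(axis=0)                 # center each band
    cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)   # (bands, bands) covariance
    vals, vecs = np.linalg.eigh(cov)        # symmetric eigendecomposition;
    return Xc @ vecs[:, ::-1][:, :k]        # eigenvalues ascend, so reverse

cube = np.random.rand(10000, 200)           # 10k pixels, 200 bands (mock)
reduced = pca(cube, k=5)                    # (10000, 5)
```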

17 pages, 2869 KiB  
Article
GPU-Based Soil Parameter Parallel Inversion for PolSAR Data
by Qiang Yin, You Wu, Fan Zhang and Yongsheng Zhou
Remote Sens. 2020, 12(3), 415; https://doi.org/10.3390/rs12030415 - 28 Jan 2020
Cited by 7 | Viewed by 2572
Abstract
With the development of polarimetric synthetic aperture radar (PolSAR), quantitative parameter inversion has seen great progress, especially in soil parameter inversion, which has achieved good results in applications. However, PolSAR datasets often run to many terabytes, and this huge data volume directly affects the efficiency of inversion. The efficiency of soil moisture and roughness inversion has therefore become a problem in the application of this PolSAR technique. A parallel realization of multiple inversion models for PolSAR data based on a graphics processing unit (GPU) is proposed in this paper. The method utilizes the high-performance parallel computing capability of a GPU to optimize the implementation of surface inversion models for polarimetric SAR data. Three classical forward scattering models and their corresponding inversion algorithms are analyzed; they differ in their polarimetric data requirements, application situations, and inversion performance. Specifically, the inversion of PolSAR data is accelerated mainly through the highly concurrent threads of the GPU. Various optimization strategies are applied along the inversion pipeline, such as parallel task allocation and optimizations of the instruction level, data storage, and data transmission between CPU and GPU. The advantages of a GPU in processing computationally intensive data are demonstrated in the experiments, where the efficiency of soil roughness and moisture inversion is increased by one to two orders of magnitude.
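A hedged sketch of the per-pixel parallelism being exploited follows: inversion by grid search over a precomputed forward-model lookup table, vectorized so that every pixel is processed independently (the pattern that maps to one GPU thread per pixel). The forward model is a made-up placeholder, not one of the three scattering models analyzed in the paper.

```python
import numpy as np

def forward_model(moisture, roughness):
    """Placeholder backscatter model sigma0(mv, s); illustrative only."""
    return -20.0 + 30.0 * moisture + 5.0 * np.log10(roughness)

# Build the lookup table once over the parameter grid.
mv = np.linspace(0.05, 0.45, 32)
s = np.linspace(0.2, 3.0, 32)
MV, S = np.meshgrid(mv, s, indexing="ij")
lut = forward_model(MV, S)                      # (32, 32) simulated sigma0

def invert(sigma0_image):
    """Each pixel independently picks the nearest LUT entry."""
    diff = np.abs(sigma0_image[..., None, None] - lut)  # (H, W, 32, 32)
    idx = diff.reshape(*sigma0_image.shape, -1).argmin(axis=-1)
    return MV.ravel()[idx], S.ravel()[idx]              # moisture, roughness

obs = forward_model(0.25, 1.5) + 0.5 * np.random.randn(100, 100)
moisture_map, roughness_map = invert(obs)
```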

19 pages, 1153 KiB  
Article
Noise Removal from Remote Sensed Images by NonLocal Means with OpenCL Algorithm
by Donatella Granata, Angelo Palombo, Federico Santini and Umberto Amato
Remote Sens. 2020, 12(3), 414; https://doi.org/10.3390/rs12030414 - 28 Jan 2020
Cited by 4 | Viewed by 2663
Abstract
We introduce a multi-platform, portable implementation of the NonLocal Means methodology aimed at removing noise from remotely sensed images. It is particularly suited to hyperspectral sensors, for which real-time applications are not feasible with CPU-only algorithms. In recent decades, computational devices have typically combined integrated central processing units (CPUs) and graphics processing units (GPUs) under cross-vendor heterogeneous system architectures. However, the lack of standardization has meant that most implementations are too specific to a given architecture, eliminating (or making extremely difficult) code re-usability across platforms. To address this issue, we implement a multi-option NonLocal Means algorithm using the Open Computing Language (OpenCL) and apply it to Hyperion hyperspectral images. Experimental results demonstrate the dramatic speed-up achieved by the algorithm on GPUs with respect to conventional serial algorithms on CPUs, as well as portability across different platforms. This makes accurate real-time denoising of hyperspectral images feasible.
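The weight computation that the OpenCL kernels parallelize can be sketched as follows for a single interior pixel: the pixel is restored as a weighted average over a search window, with weights decaying in the squared distance between surrounding patches. Window and patch sizes and the filtering parameter h are illustrative.

```python
import numpy as np

def nlm_pixel(img, r, c, patch=3, search=7, h=0.1):
    """Denoise one interior pixel (no boundary handling in this sketch)."""
    p, s = patch // 2, search // 2
    ref = img[r - p:r + p + 1, c - p:c + p + 1]
    num = den = 0.0
    for i in range(r - s, r + s + 1):       # in OpenCL, one work-item
        for j in range(c - s, c + s + 1):   # computes one output pixel
            cand = img[i - p:i + p + 1, j - p:j + p + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
            num += w * img[i, j]
            den += w
    return num / den

img = np.random.rand(32, 32)
print(nlm_pixel(img, 16, 16))
```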

26 pages, 5011 KiB  
Article
GPU-Based Lossless Compression of Aurora Spectral Data using Online DPCM
by Jiaojiao Li, Jiaji Wu and Gwanggil Jeon
Remote Sens. 2019, 11(14), 1635; https://doi.org/10.3390/rs11141635 - 10 Jul 2019
Cited by 2 | Viewed by 2813
Abstract
Aurorae have high research value, but the volume of aurora spectral data is very large, which poses great challenges for storage and transmission. To alleviate this problem, compression of aurora spectral data is indispensable. This paper presents a parallel Compute Unified Device Architecture (CUDA) implementation of the prediction-based online Differential Pulse Code Modulation (DPCM) method for the lossless compression of aurora spectral data. Two improvements are proposed to raise the compression performance of the online DPCM method: one in the computation of the prediction coefficients and the other in the encoding of the residual. In the CUDA implementation, we propose a decomposition method for the matrix multiplication that avoids redundant data accesses and calculations. In addition, the CUDA implementation is optimized with multi-stream and multi-GPU techniques. The average compression time for an aurora spectral image reaches about 0.06 s, far less than the 15 s aurora spectral data acquisition interval, saving considerable time for transmission and other subsequent tasks.
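A hedged sketch of prediction-based DPCM on a single spectral line is given below: a k-tap linear predictor is fitted by least squares (the normal-equation matrix products are the kind of operation the paper's CUDA kernels decompose and parallelize), and only the residuals would then be entropy-coded. The tap count and data are illustrative.

```python
import numpy as np

def dpcm_residuals(x: np.ndarray, k: int = 4):
    """Fit a k-tap predictor by least squares, return taps and residuals."""
    # (N - k, k) matrix whose row j holds the k samples preceding x[j + k].
    A = np.stack([x[i:len(x) - k + i] for i in range(k)], axis=1)
    target = x[k:]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)   # predictor taps
    pred = np.rint(A @ coef)
    return coef, np.concatenate([x[:k], target - pred]) # header + residuals

x = np.cumsum(np.random.randint(-3, 4, 1024)).astype(float)
coef, res = dpcm_residuals(x)
print(res.std(), "<", x.std())   # residuals are much cheaper to encode
```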

25 pages, 22159 KiB  
Article
Fast GPU-Based Enhanced Wiener Filter for Despeckling SAR Data
by Bilel Kanoun, Giampaolo Ferraioli, Vito Pascazio and Gilda Schirinzi
Remote Sens. 2019, 11(12), 1473; https://doi.org/10.3390/rs11121473 - 21 Jun 2019
Cited by 7 | Viewed by 3655
Abstract
Speckle noise is an inherent problem in image processing, particularly for synthetic aperture radar images. To mitigate its adverse effects, several approaches, including spatial-based and non-local filtering, have been introduced over the last three decades. However, these techniques have limitations: it is very difficult to find an approach that performs well in terms of both noise reduction and image detail preservation while also producing a filtered output without high computational complexity and within a short processing time. In this paper, we evaluate the performance of a newly developed despeckling algorithm, presented as an enhancement of the classical Wiener filter and designed to run on a Graphics Processing Unit (GPU). The algorithm is tested on both a simulated framework and real Sentinel-1 SAR data. The results, obtained in comparison with other filters, are promising: the proposed method proves to be a useful filtering instrument for large images, completing the processing within a limited time while ensuring good speckle noise reduction and considerable image detail preservation.
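For orientation, the sketch below implements the classical locally adaptive Wiener filter that the paper enhances; the enhancement itself and the GPU kernels are not reproduced. Each pixel is pulled toward its local mean in proportion to how much of the local variance is attributable to noise.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wiener_despeckle(img, win=7, noise_var=None):
    """Locally adaptive Wiener (Lee-style) shrinkage toward the local mean."""
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    var = sq_mean - mean * mean                     # local variance
    if noise_var is None:
        noise_var = var.mean()                      # crude noise estimate
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)               # Wiener-style shrinkage

speckled = np.random.gamma(4.0, 0.25, size=(256, 256))  # speckle-like field
clean = wiener_despeckle(speckled)
```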

24 pages, 12765 KiB  
Article
High-Speed Ship Detection in SAR Images Based on a Grid Convolutional Neural Network
by Tianwen Zhang and Xiaoling Zhang
Remote Sens. 2019, 11(10), 1206; https://doi.org/10.3390/rs11101206 - 21 May 2019
Cited by 146 | Viewed by 6594
Abstract
As an active microwave sensor, synthetic aperture radar (SAR) provides day-and-night, all-weather Earth observation and has become one of the most important means of high-resolution Earth observation and global resource management. Ship detection in SAR images also plays an increasingly important role in ocean observation and disaster relief. Both traditional feature extraction methods and deep learning (DL) methods currently focus almost exclusively on improving ship detection accuracy, while detection speed is neglected. However, the speed of SAR ship detection is extraordinarily significant, especially in real-time maritime rescue and emergency military decision-making. To address this problem, this paper proposes a novel approach for high-speed ship detection in SAR images based on a grid convolutional neural network (G-CNN). The method improves detection speed by meshing the input image, inspired by the basic idea of You Only Look Once (YOLO), and by using depthwise separable convolution. G-CNN is a new network structure composed mainly of a backbone convolutional neural network (B-CNN) and a detection convolutional neural network (D-CNN). First, the SAR images to be processed are divided into grid cells, and each grid cell is responsible for detecting specific ships. Then, the whole image is fed into the B-CNN to extract features. Finally, ship detection is completed in the D-CNN at three scales. We experimented on the open SAR Ship Detection Dataset (SSDD) used by many other scholars and then validated the generalization ability of G-CNN on two SAR images from RadarSat-1 and Gaofen-3. The experimental results show that the detection speed of the proposed method exceeds that of existing methods, such as the faster regions convolutional neural network (Faster R-CNN), the single shot multi-box detector (SSD), and YOLO, under the same hardware environment with an NVIDIA GTX1080 graphics processing unit (GPU), while detection accuracy is kept within an acceptable range. The proposed G-CNN ship detection system has great application value in real-time maritime disaster rescue and emergency military strategy formulation.
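Part of G-CNN's speed is attributed to depthwise separable convolution. A minimal PyTorch sketch of that building block follows; the channel counts are illustrative, and this is not the authors' B-CNN or D-CNN.

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
        nn.ReLU(),
        # Pointwise: 1x1 convolution mixes the channels.
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
        nn.ReLU(),
    )

block = depthwise_separable(32, 64)
y = block(torch.randn(1, 32, 128, 128))      # -> (1, 64, 128, 128)
# Parameters: 32*9 + 32 + 32*64 + 64 here, versus 32*64*9 + 64 for a
# standard 3x3 convolution with the same channel counts.
print(sum(p.numel() for p in block.parameters()))
```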

14 pages, 3211 KiB  
Article
Ship Detection Based on YOLOv2 for SAR Imagery
by Yang-Lang Chang, Amare Anagaw, Lena Chang, Yi Chun Wang, Chih-Yu Hsiao and Wei-Hong Lee
Remote Sens. 2019, 11(7), 786; https://doi.org/10.3390/rs11070786 - 02 Apr 2019
Cited by 252 | Viewed by 16555
Abstract
Synthetic aperture radar (SAR) imagery has been used as a promising data source for monitoring maritime activities, and its application to oil and ship detection has been the focus of many previous studies. Many object detection methods, ranging from traditional to deep learning approaches, have been proposed. However, the majority are computationally intensive and have accuracy problems. The huge volume of remote sensing data also poses a challenge for real-time object detection. To mitigate this problem, a high-performance computing (HPC) method utilizing GPU-based computing has been proposed to accelerate SAR imagery analysis. In this paper, we propose an enhanced GPU-based deep learning method to detect ships in SAR images. The You Only Look Once version 2 (YOLOv2) deep learning framework is used to model the architecture and train the model. YOLOv2 is a state-of-the-art real-time object detection system that outperforms the Faster Region-Based Convolutional Network (Faster R-CNN) and Single Shot Multibox Detector (SSD) methods. Additionally, to reduce computational time while maintaining competitive detection accuracy, we develop a new architecture with fewer layers, called YOLOv2-reduced. In the experiments, we use two datasets for training and testing: the SAR ship detection dataset (SSDD) and the Diversified SAR Ship Detection Dataset (DSSDD). The YOLOv2 test results show an increase in ship detection accuracy as well as a noticeable reduction in computational time compared to Faster R-CNN. The proposed YOLOv2 architecture achieves accuracies of 90.05% and 89.13% on the SSDD and DSSDD datasets, respectively. The proposed YOLOv2-reduced architecture attains detection performance similar to YOLOv2's, but with less computational time on an NVIDIA TITAN X GPU. The experimental results show that deep learning can bring a big leap forward in SAR image ship detection performance.
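Detection accuracy figures like those above rest on the intersection-over-union (IoU) overlap between predicted and ground-truth boxes; a small self-contained sketch of that computation follows, with corner-format boxes as an assumption.

```python
def iou(a, b):
    """Boxes as (x1, y1, x2, y2); returns intersection over union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175, about 0.143
```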

30 pages, 5365 KiB  
Article
Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons
by Ernestina Martel, Raquel Lazcano, José López, Daniel Madroñal, Rubén Salvador, Sebastián López, Eduardo Juarez, Raúl Guerra, César Sanz and Roberto Sarmiento
Remote Sens. 2018, 10(6), 864; https://doi.org/10.3390/rs10060864 - 01 Jun 2018
Cited by 31 | Viewed by 6495
Abstract
Dimensionality reduction is a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) are computationally demanding, making their implementation on high-performance computer architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and hence reducing the time required to process a given hyperspectral image. Moreover, the results obtained with different hyperspectral images are compared with those of a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis highlighting the pros and cons of each option.
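A hedged sketch of offloading the PCA hot spots (covariance and eigendecomposition) to a GPU with CuPy follows; it mirrors the structure of a CUDA implementation without being the authors' code, and it assumes a CUDA-capable GPU with the cupy package installed.

```python
import numpy as np
import cupy as cp

def pca_gpu(X_host: np.ndarray, k: int) -> np.ndarray:
    X = cp.asarray(X_host)                   # host -> device transfer
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)    # executed as a GPU GEMM
    vals, vecs = cp.linalg.eigh(cov)         # GPU eigendecomposition
    Y = Xc @ vecs[:, ::-1][:, :k]            # top-k components
    return cp.asnumpy(Y)                     # device -> host transfer

reduced = pca_gpu(np.random.rand(8192, 128).astype(np.float32), k=10)
```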
