Special Issue "Image Processing and Analysis: Trends in Registration, Data Fusion, 3D Reconstruction and Change Detection"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 May 2020).

Special Issue Editors

Dr. Riccardo Roncella
Guest Editor
Department of Engineering and Architecture, Università degli Studi di Parma, Parco Area delle Scienze, 181/a, 43124 Parma, PR, Italy
Interests: image matching; image orientation; satellite/airborne/UAV photogrammetry; 3D reconstruction; monitoring; laser scanning; vision metrology
Dr. Mattia Previtali
Guest Editor
Department of Architecture, Built Environment and Construction Engineering, Politecnico di Milano, Via Giuseppe Ponzio, 31, 20133 Milano, MI, Italy
Interests: image analysis; image matching; multiview reconstruction; laser scanning; point cloud classification; monitoring

Special Issue Information

Dear Colleagues,

Satellite, aerial, and UAV imaging are constantly evolving, in terms of both data volume and quality. Earth observation programmes, public and private alike, are making a growing amount of multi-temporal data available to different users (private companies, public administrations, and the scientific community), often free of charge, at increasing spatial resolution and with short revisit times. At the opposite end of the platform scale, UAVs, thanks to their greater flexibility, are coming to represent a new paradigm for acquiring high-resolution information and offer mapping capabilities that were unachievable until a few years ago. Owing to this wide availability of data, the scope of image-based remote sensing applications and the number of active, often unskilled, users have grown considerably in recent years: today, besides traditional application fields such as land use and environmental monitoring, areas such as cultural heritage, archaeology, precision farming, and the monitoring of human activities are emerging fields of research and practical interest. In such a fast-evolving context, automated techniques are needed to process and extract relevant information from this large amount of data.

The Special Issue focuses on the latest advances in innovative image analysis and image processing techniques and on their contribution to a wide range of application fields, in an attempt to foresee where they will lead the discipline and its practice in the coming years. As far as process automation is concerned, it is of the utmost importance to properly understand the algorithmic implementation of the different techniques and to identify their maturity, as well as the applications in which their use might unlock their full potential. For this reason, particular attention may be paid to (i) accuracy: the agreement between reference (check) and measured data (e.g., check-point accuracy in image orientation, or testing-set accuracy in data classification); (ii) completeness: the amount of information obtained from the different methodologies and its distribution in space and time; (iii) reliability: algorithm consistency, intended as stability to noise, and algorithm robustness, intended as the estimation of the measurements' reliability level and the capability to identify gross errors; and (iv) processing speed: the algorithm's computational load.
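
To make the accuracy criterion concrete, check-point accuracy in image orientation is commonly summarised as a root-mean-square error over independent check points; a minimal numpy sketch with hypothetical coordinates (not taken from any paper in this issue):

```python
import numpy as np

# Hypothetical reference (check) and measured check-point coordinates, in metres.
reference = np.array([[100.0, 200.0], [150.0, 260.0], [180.0, 310.0]])
measured  = np.array([[100.2, 199.9], [149.8, 260.3], [180.1, 309.8]])

residuals = measured - reference
# RMSE of the 2D point errors: agreement between measured and reference data.
rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
```

With these numbers the RMSE is about 0.28 m; in practice the same statistic is usually reported separately for planimetry and height.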

Scope includes but is not limited to the following:

  • image registration and multi-source data integration or fusion methods;
  • cross-calibration of sensors and cross-validation of data/models;
  • seamless orientation of images acquired from different platforms;
  • object extraction and accuracy evaluation in 3D reconstruction;
  • low-cost sensors for mapping and 3D modelling;
  • automation in thematic map production (e.g., spatial and temporal pattern analysis, change detection, and definition of specific change metrics);
  • automatic or semi-automatic procedures to produce downstream operational services;
  • deep learning methods for data classification and pattern recognition.

Prof. Dr. Riccardo Roncella
Prof. Dr. Mattia Previtali
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image registration
  • Change detection
  • Pattern recognition
  • Hyperspectral
  • 3D reconstruction
  • Image matching
  • Data fusion
  • Deep learning
  • Multi-sensor
  • Object-based image analysis

Published Papers (18 papers)


Research


Article
Geo-Registering Consecutive Construction Site Recordings Using a Pre-Registered Reference Module
Remote Sens. 2020, 12(12), 1928; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12121928 - 15 Jun 2020
Abstract
The monitoring of construction sites and the progress achieved on them is a vital aspect of construction works in the Architecture, Engineering and Construction (AEC) industry. The creation of consecutive datasets portraying the site in subsequent phases only becomes feasible if three-dimensional reconstruction requires limited time. Moreover, a shared coordinate system between all datasets is essential for monitoring purposes. In this work, a new photogrammetric framework is presented to shift from the current error-prone and tedious manual geo-referencing process to a semi-automated one. The foundation of the method is an accurately processed reference module that repeatedly serves as the starting point for processing subsequent image datasets. By means of overlap between pictures in both datasets, the coordinate system incorporated in the reference module is inherited by the subsequent datasets, hence bypassing the indication process. The proposed procedure outperforms current methods while requiring less time over all consecutive datasets. In our experiments, we compared an unaltered part of two subsequent datasets, each of them processed via both the traditional and the proposed method. The obtained mean disparity was 9 mm, whereas for the manual approach it was 16 mm. Especially for comparative analyses, the proposed approach yields excellent results, as every dataset is registered in exactly the same way, whereas results diverge more with manual methods. In conclusion, our approach is preferable to the current one, especially for a multitude of consecutive site reconstructions, as no ground control points (GCPs) must be indicated in each separate subsequent dataset, while yielding similar or even better results.

Article
Geometric Recognition of Moving Objects in Monocular Rotating Imagery Using Faster R-CNN
Remote Sens. 2020, 12(12), 1908; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12121908 - 12 Jun 2020
Abstract
Moving object detection and tracking from image sequences has been extensively studied in a variety of fields. Nevertheless, observing geometric attributes and identifying the detected objects for further investigation of moving behaviour has drawn less attention. The focus of this study is to determine moving trajectories, object heights, and object recognition using a monocular camera configuration. This paper presents a scheme for moving object recognition with three-dimensional (3D) observation using a faster region-based convolutional neural network (Faster R-CNN) with a stationary, rotating pan-tilt-zoom (PTZ) camera and close-range photogrammetry. The camera motion effects are first eliminated to detect objects exhibiting actual movement, and a moving object recognition process is employed to recognize the object classes and to facilitate the estimation of their geometric attributes. This information can then contribute to the investigation of object moving behaviour. To evaluate the effectiveness of the proposed scheme quantitatively, an experiment with an indoor synthetic configuration is first conducted; outdoor real-life data are then used to verify feasibility based on recall, precision, and the F1 index. The experiments have shown promising results and have verified the effectiveness of the proposed method in both laboratory and real environments. The proposed approach calculates height and speed estimates of the recognized moving objects, including pedestrians and vehicles, and shows promising results with acceptable errors and application potential using existing PTZ camera images at very low cost.
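
The recall, precision, and F1 evaluation used above reduces to three ratios; a minimal sketch with hypothetical detection counts (true positives, false positives, false negatives; not the paper's figures):

```python
# Hypothetical counts from a detection run.
tp, fp, fn = 42, 8, 6

precision = tp / (tp + fp)                          # fraction of detections that are correct
recall = tp / (tp + fn)                             # fraction of real objects that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```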

Article
Effect of Image Fusion on Vegetation Index Quality—A Comparative Study from Gaofen-1, Gaofen-2, Gaofen-4, Landsat-8 OLI and MODIS Imagery
Remote Sens. 2020, 12(10), 1550; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12101550 - 13 May 2020
Abstract
In recent years, image fusion methods have received increasing attention in remote sensing, vegetation cover change analysis, vegetation index (VI) mapping, etc. To produce high-resolution, good-quality (and low-cost) VI maps from a fused image, the image's quality and its underlying factors need to be properly identified. For example, same-sensor image fusion generally has a higher spatial resolution ratio (SRR) (1:3 to 1:5), whereas multi-sensor fusion has a lower SRR (1:8 to 1:10). Besides SRR, other factors may affect the fused vegetation index (FVI) result, and these have not been investigated in detail before. In this research, we applied an image fusion and quality assessment strategy to determine the effect of image fusion on VI quality using Gaofen-1 (GF1), Gaofen-2 (GF2), Gaofen-4 (GF4), Landsat-8 OLI, and MODIS imagery with their panchromatic (PAN) and multispectral (MS) bands at low SRR (1:6 to 1:15). We acquired a total of nine images (4 PAN + 5 MS) on almost the same date (the GF1, GF2, GF4 and MODIS images were acquired on 2017/07/13 and the Landsat-8 OLI image on 2017/07/17). The results show that image fusion has the least impact on the Green Normalized Difference Vegetation Index (GNDVI) and the Atmospherically Resistant Vegetation Index (ARVI) compared to other VIs. VI quality is mostly insensitive to image fusion except for the high-pass filter (HPF) algorithm. The subjective and objective quality evaluations show that Gram-Schmidt (GS) fusion has the least impact on FVI quality, and that with decreasing SRR, FVI quality decreases at a slow rate. FVI quality varies with the type of image fusion algorithm and the SRR, along with the spectral response function (SRF) and signal-to-noise ratio (SNR). However, FVI quality remains good even at small SRRs (1:6 to 1:15 or lower) as long as the images have good SNR and a minimal SRF effect. The findings of this study could make high-quality VI mapping cost-effective and highly applicable even at small SRRs (1:15 or lower).
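
For reference, the two indices found least sensitive to fusion have simple closed forms: GNDVI = (NIR - Green)/(NIR + Green), while ARVI replaces Red with RB = Red - γ(Blue - Red), i.e., RB = 2·Red - Blue for the usual γ = 1. A minimal numpy sketch on hypothetical reflectance values (not from the study):

```python
import numpy as np

# Hypothetical surface reflectances for two pixels.
green = np.array([0.10, 0.12])
red   = np.array([0.08, 0.09])
blue  = np.array([0.05, 0.06])
nir   = np.array([0.40, 0.45])

gndvi = (nir - green) / (nir + green)   # Green Normalized Difference Vegetation Index
rb = 2 * red - blue                     # atmospherically corrected "red" (gamma = 1)
arvi = (nir - rb) / (nir + rb)          # Atmospherically Resistant Vegetation Index
```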

Article
A Deep Learning-Based Robust Change Detection Approach for Very High Resolution Remotely Sensed Images with Multiple Features
Remote Sens. 2020, 12(9), 1441; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12091441 - 02 May 2020
Abstract
Very high-resolution remote sensing change detection has long been an important research issue owing to registration errors, method robustness, and monitoring accuracy. This paper proposes a robust and more accurate change detection (CD) approach, applied first to a small experimental area and then extended to a wider one. A feature space is constructed that includes object features, Visual Geometry Group (VGG) depth features, and texture features. The difference image is obtained by considering the contextual information within a circle of scalable radius. This overcomes the registration error caused by rotation and shift of the instantaneous field of view and also improves the reliability and robustness of the CD. To enhance the robustness of the U-Net model, the training dataset is constructed manually via various operations, such as blurring the image, adding noise, and rotating the image. The trained model is then used to predict the experimental areas, achieving 92.3% accuracy. The proposed method is compared with a Support Vector Machine (SVM) and a Siamese network: the check error rate dropped to 7.86%, while the Kappa increased to 0.8254. The results reveal that our method outperforms both SVM and the Siamese network.
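
The augmentation operations used to robustify the U-Net (blurring, added noise, rotation) can be sketched in plain numpy; the image and parameters here are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)   # stand-in for a training tile

# Additive Gaussian noise, clipped back to the valid range.
noisy = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

# 90-degree rotation.
rotated = np.rot90(img)

# Crude blur: average each pixel with its four neighbours (edge-padded).
pad = np.pad(img, 1, mode="edge")
blurred = (pad[1:-1, 1:-1] + pad[:-2, 1:-1] + pad[2:, 1:-1]
           + pad[1:-1, :-2] + pad[1:-1, 2:]) / 5.0
```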

Article
Two-Phase Object-Based Deep Learning for Multi-Temporal SAR Image Change Detection
Remote Sens. 2020, 12(3), 548; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12030548 - 07 Feb 2020
Abstract
Change detection is one of the fundamental applications of synthetic aperture radar (SAR) images. However, the speckle noise present in SAR images has a negative effect on change detection, leading to frequent false alarms in the mapping products. In this research, a novel two-phase object-based deep learning approach is proposed for multi-temporal SAR image change detection. Compared with traditional methods, the proposed approach brings two main innovations. One is to classify all pixels into three categories rather than two: unchanged pixels, changed pixels caused by strong speckle (false changes), and changed pixels formed by real terrain variation (real changes). The other is to group neighbouring pixels into superpixel objects so as to exploit local spatial context. Two phases are designed in the methodology: (1) generate objects based on the simple linear iterative clustering (SLIC) algorithm, and discriminate these objects into changed and unchanged classes using fuzzy c-means (FCM) clustering and a deep PCANet; the output of this phase is the set of changed and unchanged superpixels. (2) Apply deep learning on the pixel sets over the changed superpixels only, obtained in the first phase, to discriminate real changes from false changes. SLIC is employed again to obtain new superpixels in the second phase. Low-rank and sparse decomposition is applied to these new superpixels to suppress speckle noise significantly, and a further FCM clustering step is applied to them. A new PCANet is then trained to classify the two kinds of changed superpixels and produce the final change maps. Numerical experiments demonstrate that, compared with benchmark methods, the proposed approach can distinguish real changes from false changes effectively with significantly reduced false alarm rates, and achieves up to 99.71% change detection accuracy on multi-temporal SAR imagery.
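
Phase 1's clustering step can be illustrated with a minimal plain-numpy fuzzy c-means on object-level difference values (two clusters: changed vs. unchanged). The difference values below are synthetic stand-ins; the real pipeline would feed SLIC superpixel statistics into FCM and a PCANet:

```python
import numpy as np

def fcm_two_clusters(x, m=2.0, iters=50):
    """Minimal fuzzy c-means for 1-D samples with two clusters (sketch)."""
    c = np.array([x.min(), x.max()], dtype=float)    # initial centres
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12  # sample-to-centre distances
        u = 1.0 / d ** (2.0 / (m - 1.0))             # unnormalised memberships
        u /= u.sum(axis=1, keepdims=True)
        c = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u.argmax(axis=1), c

rng = np.random.default_rng(1)
# Synthetic per-object difference magnitudes: 80 unchanged, 20 changed.
diff = np.concatenate([rng.normal(0.1, 0.05, 80), rng.normal(0.9, 0.05, 20)])
labels, centres = fcm_two_clusters(diff)
```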

Article
PGA-SiamNet: Pyramid Feature-Based Attention-Guided Siamese Network for Remote Sensing Orthoimagery Building Change Detection
Remote Sens. 2020, 12(3), 484; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12030484 - 03 Feb 2020
Abstract
In recent years, building change detection has made remarkable progress through deep learning. The core problems of this technique are the need for additional data (e.g., Lidar or semantic labels) and the difficulty of extracting sufficient features. In this paper, we propose an end-to-end network, called the pyramid feature-based attention-guided Siamese network (PGA-SiamNet), to solve these problems. The network is trained to capture possible changes using a convolutional neural network in a pyramid. It emphasizes the importance of correlation among the input feature pairs by introducing a global co-attention mechanism. Furthermore, we improve the long-range dependencies of the features by utilizing various attention mechanisms and then aggregating the features of the low level and the co-attention level; this helps to obtain richer object information. Finally, we evaluated our method on a publicly available building dataset (WHU) and a new building dataset (EV-CD). The experiments demonstrate that the proposed method is effective for building change detection and outperforms the existing state-of-the-art methods on high-resolution remote sensing orthoimages across various metrics.
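
The global co-attention idea, correlating every spatial position of one feature map with every position of the other so that each stream can attend to its counterpart, can be sketched in numpy. The dot-product affinity and the toy feature maps below are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
feat_a = rng.random((16, 64))   # image A: 16 spatial positions x 64 channels
feat_b = rng.random((16, 64))   # image B, same layout

affinity = feat_a @ feat_b.T                     # position-to-position correlation
attended_b = softmax(affinity, axis=1) @ feat_b  # B re-weighted for each A position
```

The attended features would then typically be combined with the originals before the change decision layers.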

Article
A Two-Stream Symmetric Network with Bidirectional Ensemble for Aerial Image Matching
Remote Sens. 2020, 12(3), 465; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12030465 - 02 Feb 2020
Abstract
In this paper, we propose a novel method to precisely match two aerial images that were obtained in different environments via a two-stream deep network. By internally augmenting the target image, the network considers the two streams with three input images and reflects the additional augmented pair in training. As a result, the training process of the deep network is regularized and the network becomes robust to the variance of aerial images. Furthermore, we introduce an ensemble method based on the bidirectional network, motivated by the isomorphic nature of the geometric transformation. We obtain two global transformation parameter sets without any additional network or parameters, which alleviates asymmetric matching results and enables a significant improvement in performance by fusing the two outcomes. For the experiments, we adopt aerial images from Google Earth and the International Society for Photogrammetry and Remote Sensing (ISPRS). To quantitatively assess our results, we apply the probability of correct keypoints (PCK) metric, which measures the degree of matching. The qualitative and quantitative results show a sizable performance gap over conventional methods for matching aerial images. All code, our trained model, and the dataset are available online.
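
The PCK metric reported here counts predicted keypoints that land within a tolerance, commonly a fraction α of the image size, of their ground-truth positions; a minimal sketch with hypothetical keypoints:

```python
import numpy as np

def pck(pred, gt, alpha=0.05, img_size=240):
    """Share of predicted keypoints within alpha * img_size of the ground truth."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist <= alpha * img_size).mean())

gt   = np.array([[10.0, 10.0], [50.0, 80.0], [120.0, 200.0]])
pred = np.array([[12.0, 11.0], [49.0, 79.0], [160.0, 230.0]])  # last one is far off
score = pck(pred, gt)   # 2 of 3 keypoints inside the 12-pixel tolerance
```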

Article
Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions
Remote Sens. 2019, 11(23), 2765; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11232765 - 24 Nov 2019
Abstract
Remotely sensed data can provide the basis for timely and efficient building damage maps, which are of fundamental importance for supporting response activities after disaster events. However, the generation of these maps continues to be based mainly on the manual extraction of relevant information in operational frameworks. Considering the identification of visible structural damage caused by earthquakes and explosions, several recent works have shown that Convolutional Neural Networks (CNN) outperform traditional methods. However, the limited availability of public image datasets depicting structural disaster damage, and the wide variety of sensors and spatial resolutions used for these acquisitions (from space, aerial and UAV platforms), have left unclear how effectively these networks can serve First Responder needs and emergency mapping service requirements. In this paper, an advanced CNN for visible structural damage detection is tested to shed some light on what deep learning networks can currently deliver, and its adoption in realistic operational conditions after earthquakes and explosions is critically discussed. The heterogeneous and large datasets collected by the authors, covering different locations, spatial resolutions and platforms, were used to assess network performance in terms of transfer learning, with specific regard to the geographical transferability of the trained network to imagery acquired in different locations. The computational time needed to deliver these maps is also assessed. Results show that quality metrics are influenced by the composition of the training samples used in the network. To promote their wider use, three pre-trained networks, optimized for satellite, airborne and UAV image spatial resolutions and viewing angles, are made freely available to the scientific community.

Article
Online Correction of the Mutual Miscalibration of Multimodal VIS–IR Sensors and 3D Data on a UAV Platform for Surveillance Applications
Remote Sens. 2019, 11(21), 2469; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11212469 - 23 Oct 2019
Abstract
Unmanned aerial vehicles (UAVs) are widely used to protect critical infrastructure, and they are most often equipped with one or more RGB cameras and sometimes with a thermal imaging camera as well. To obtain as much information as possible, the data from these sensors should be combined or fused. This article presents a setup in which data from RGB (visible, VIS) and thermal (infrared, IR) cameras and 3D data are combined in a common coordinate system. A specially designed calibration target was developed to enable the geometric calibration of the IR and VIS cameras in the same coordinate system. The 3D data are compatible with the VIS coordinate system when the structure-from-motion (SfM) algorithm is used. The main focus of this article is maintaining the spatial coherence between these data under relative camera movement, which usually results in a miscalibration of the system. Therefore, a new algorithm for detecting sensor system miscalibration, based on phase correlation with automatic calibration correction in real time, is introduced.
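
Phase correlation, the core of the proposed miscalibration detector, recovers the translation between two images from the normalised cross-power spectrum; a minimal numpy sketch on a synthetic pair (the drift below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))   # simulate a (5, -3) pixel drift

F_ref, F_shift = np.fft.fft2(ref), np.fft.fft2(shifted)
cross = F_shift * np.conj(F_ref)
cross /= np.abs(cross) + 1e-12                       # normalised cross-power spectrum
corr = np.fft.ifft2(cross).real                      # peak marks the translation

dy, dx = np.unravel_index(corr.argmax(), corr.shape)
# Offsets beyond half the image size wrap around to negative shifts.
if dy > ref.shape[0] // 2:
    dy -= ref.shape[0]
if dx > ref.shape[1] // 2:
    dx -= ref.shape[1]
```

In the paper, an offset estimated this way drives an automatic correction of the camera calibration in real time.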

Article
Improved Piecewise Linear Transformation for Precise Warping of Very-High-Resolution Remote Sensing Images
Remote Sens. 2019, 11(19), 2235; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11192235 - 25 Sep 2019
Abstract
A large number of evenly distributed conjugate points (CPs) in the entirely overlapping regions of the images are required to achieve successful co-registration between very-high-resolution (VHR) remote sensing images. The CPs are then used to construct a non-linear transformation model that locally warps a sensed image to a reference image's coordinates. The piecewise linear (PL) transformation is widely exploited for warping VHR images because of its superior performance compared to other methods. The PL transformation constructs triangular regions on a sensed image from the CPs by applying the Delaunay algorithm, after which the corresponding triangular regions in a reference image are constructed using the same CPs. Each region in the sensed image is then locally warped to the corresponding region of the reference image through an affine transformation estimated from the CPs at the triangle vertices. The warping performance of the PL transformation is reliable, particularly in regions inside the triangles, i.e., within the convex hull. However, the regions outside the triangles, which are warped by extending the boundary planes using CPs located close to those regions, incur severe geometric distortion. In this study, we propose an effective approach that improves the warping performance of the PL transformation over the area outside the triangles. The proposed improved piecewise linear (IPL) transformation uses additional pseudo-CPs intentionally extracted from positions on the boundary of the sensed image. The corresponding pseudo-CPs on the reference image are determined by estimating an affine transformation from CPs located close to the pseudo-CPs. The pseudo-CPs are used together with the original CPs to construct accordingly enlarged triangular regions. Experiments on both simulated and real datasets, constructed from Worldview-3 and Kompsat-3A satellite images, were conducted to validate the effectiveness of the proposed IPL transformation. The IPL transformation was shown to outperform existing linear and non-linear transformation models such as affine, third- and fourth-order polynomial, local weighted mean, and PL transformations. Moreover, we demonstrated that the IPL transformation improved the warping performance over the PL transformation outside the triangular regions, increasing the correlation coefficient values from 0.259 to 0.304, 0.603 to 0.657, and 0.180 to 0.338 in the first, second, and third real datasets, respectively.
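
The building block of both the PL and IPL transformations, a per-triangle affine estimated from three CP pairs, amounts to solving a 6 × 6 linear system; a minimal numpy sketch with hypothetical triangles:

```python
import numpy as np

def affine_from_triangle(src_tri, dst_tri):
    """Solve the six affine parameters mapping src -> dst from three point pairs."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (x, y) in enumerate(src_tri):
        A[2 * i]     = [x, y, 1, 0, 0, 0]   # x' = a*x + b*y + c
        A[2 * i + 1] = [0, 0, 0, x, y, 1]   # y' = d*x + e*y + f
        b[2 * i], b[2 * i + 1] = dst_tri[i]
    return np.linalg.solve(A, b)            # [a, b, c, d, e, f]

src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # triangle on the sensed image
dst = [(2.0, 3.0), (12.0, 3.0), (2.0, 13.0)]   # same triangle on the reference
params = affine_from_triangle(src, dst)        # a pure (2, 3) translation here
```

PL applies one such affine per Delaunay triangle; IPL enlarges the triangulation with pseudo-CPs so that image borders fall inside it.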

Article
Spatio-Temporal Data Fusion for Satellite Images Using Hopfield Neural Network
Remote Sens. 2019, 11(18), 2077; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11182077 - 04 Sep 2019
Abstract
Spatio-temporal data fusion refers to the technique of combining the high temporal resolution of coarse satellite images with the high spatial resolution of fine satellite images. However, data availability remains a major limitation in algorithm development. Existing spatio-temporal data fusion algorithms require at least one known image pair between the fine and coarse resolution images. However, data coming from two different satellite platforms do not necessarily overlap in their overpass times, restricting the application of spatio-temporal data fusion. In this paper, a new algorithm named the Hopfield Neural Network SPatio-tempOral daTa fusion model (HNN-SPOT) is developed by utilizing the optimization concept of the Hopfield neural network (HNN) for spatio-temporal image fusion. The algorithm derives a synthesized fine resolution image from a coarse spatial resolution satellite image (similar to downscaling), using one fine resolution image taken on an arbitrary date and one coarse image taken on the prediction date. HNN-SPOT particularly addresses the case in which the fine resolution and coarse resolution images are acquired at different satellite overpass times over the same geographic extent. Both simulated and real datasets over Hong Kong and Australia were used in the evaluation of HNN-SPOT. Results showed that HNN-SPOT was comparable with an existing fusion algorithm, the spatial and temporal adaptive reflectance fusion model (STARFM). HNN-SPOT assumes a consistent spatial structure for the target area between the acquisition date and the prediction date, and is therefore most applicable to geographical areas with little or no land cover change. It is shown that HNN-SPOT can produce accurate fusion results, with correlation coefficients above 0.9 over consistent land covers. For areas that have undergone land cover change, HNN-SPOT can still predict the outlines and tone of the changed features, provided they are large enough to be recorded in the coarse resolution image on the prediction date. HNN-SPOT provides a relatively new approach to spatio-temporal data fusion, and further improvements can be made by modifying or adding goals and constraints in its HNN architecture. Owing to its lower data prerequisites, HNN-SPOT is expected to broaden the applicability of fine-scale applications in remote sensing, such as environmental modelling and monitoring.
Article
Spatial–Spectral Feature Fusion Coupled with Multi-Scale Segmentation Voting Decision for Detecting Land Cover Change with VHR Remote Sensing Images
Remote Sens. 2019, 11(16), 1903; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11161903 - 14 Aug 2019
Cited by 9 | Viewed by 1546
Abstract
In this article, a novel approach for land cover change detection (LCCD) with very high resolution (VHR) remote sensing images, based on spatial–spectral feature fusion and multi-scale segmentation voting decision, is proposed. Unlike traditional methods that use a single feature without post-processing of the raw detection map, the proposed approach uses spatial–spectral features and post-processing strategies to improve detection accuracy and performance. The approach involves two stages. First, the spatial features of the VHR remote sensing image are explored to complement the insufficiency of the spectral feature, and the spatial–spectral features are fused with different strategies; the Manhattan distance between the corresponding spatial–spectral feature vectors of the bi-temporal images is then employed to measure the change magnitude and generate a change magnitude image (CMI). Second, the Otsu binary threshold algorithm divides the CMI into a binary change detection map (BCDM), and a multi-scale segmentation voting decision algorithm fuses the initial BCDMs into the final change detection map. Experiments were carried out on three pairs of bi-temporal VHR remote sensing images. The results were compared with those of state-of-the-art methods, including four popular contextual-based LCCD methods and three post-processing LCCD methods. The comparisons demonstrate that the proposed approach has an advantage over the other state-of-the-art techniques in terms of detection accuracy and performance.
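The first-stage computation described above, a Manhattan-distance change magnitude followed by Otsu thresholding, can be sketched in a few lines. This is a minimal NumPy illustration of those two standard steps, not the authors' code:

```python
import numpy as np

def change_magnitude(feat_t1, feat_t2):
    """Manhattan (L1) distance between corresponding feature vectors of
    two (H, W, D) spatial-spectral feature stacks -> (H, W) CMI."""
    return np.abs(feat_t1 - feat_t2).sum(axis=-1)

def otsu_threshold(img, nbins=256):
    """Classic Otsu: choose the threshold that maximizes the
    between-class variance of the background/foreground split."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                          # background weight
    w1 = w0[-1] - w0                              # foreground weight
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1e-12)              # background mean
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)   # foreground mean
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]
```

A binary change detection map is then simply `cmi > otsu_threshold(cmi)`; the multi-scale voting stage fuses several such maps produced at different segmentation scales.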
Article
A Novel Coarse-to-Fine Scheme for Remote Sensing Image Registration Based on SIFT and Phase Correlation
Remote Sens. 2019, 11(15), 1833; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11151833 - 06 Aug 2019
Cited by 10 | Viewed by 1475
Abstract
Automatic image registration has been widely used in remote sensing applications. However, feature-based registration is sometimes inaccurate and unstable for images with large differences in scale, grayscale, and texture. In this manuscript, a coarse-to-fine registration scheme is proposed that combines the advantages of feature-based and phase correlation-based registration. The scheme consists of four steps. First, a feature-based method is adopted for coarse registration, with a geometrical outlier removal step that exploits the geometric similarities of inliers to improve accuracy. Then, the sensed image is warped according to the coarse registration result under an affine deformation model. After that, the modified sensed image is registered to the reference image by extended phase correlation. Lastly, the final registration result is calculated by fusing the coarse and fine registrations. The proposed method preserves both the high generality of feature-based registration and the high accuracy of extended phase correlation-based registration. Experimental results on several remote sensing images, drawn from published image registration papers, demonstrate the high robustness and accuracy of the proposed method. The evaluation covers root mean square error (RMSE), Laplace mean square error (LMSE), and red–green image registration results.
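Phase correlation, the core of the fine-registration step, recovers a translation from the peak of the normalized cross-power spectrum. A minimal NumPy sketch, handling integer shifts only and omitting the affine model, feature stage, and extended variants:

```python
import numpy as np

def phase_correlation(ref, sensed):
    """Estimate the integer translation (dy, dx) that aligns `sensed`
    to `ref`, from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(sensed))
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Applying `np.roll(sensed, (dy, dx), axis=(0, 1))` then brings the sensed image into alignment with the reference; subpixel and scale/rotation extensions refine this basic estimator.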
Article
Multi-Scale Semantic Segmentation and Spatial Relationship Recognition of Remote Sensing Images Based on an Attention Model
Remote Sens. 2019, 11(9), 1044; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11091044 - 02 May 2019
Cited by 16 | Viewed by 2178
Abstract
A comprehensive interpretation of remote sensing images involves not only remote sensing object recognition but also the recognition of spatial relations between objects. Especially in the case of different objects with the same spectrum, the spatial relationship can help interpret remote sensing objects more accurately. Compared with traditional remote sensing object recognition methods, deep learning has the advantages of high accuracy and strong generalizability regarding scene classification and semantic segmentation. However, it is difficult to simultaneously recognize remote sensing objects and their spatial relationships end-to-end relying only on existing deep learning networks. To address this problem, we propose a multi-scale remote sensing image interpretation network, called the MSRIN. The architecture of the MSRIN is a parallel deep neural network based on a fully convolutional network (FCN), a U-Net, and a long short-term memory network (LSTM). The MSRIN recognizes remote sensing objects and their spatial relationships through three processes. First, the MSRIN defines a multi-scale remote sensing image caption strategy and simultaneously segments the same image using the FCN and U-Net on different spatial scales, forming a two-scale hierarchy. The outputs of the FCN and U-Net are masked to obtain the locations and boundaries of remote sensing objects. Second, using an attention-based LSTM, the remote sensing image captions describe the remote sensing objects (nouns) and their spatial relationships in natural language. Finally, we designed a remote sensing object recognition and correction mechanism that builds the relationship between the nouns in the captions and the object mask graphs, using an attention weight matrix to transfer the spatial relationships from the captions to the object mask graphs.
In other words, the MSRIN simultaneously realizes end-to-end semantic segmentation of remote sensing objects and identification of their spatial relationships. Experimental results demonstrate that the matching rate between samples and the mask graph increased by 67.37 percentage points, and the matching rate between nouns and the mask graph increased by 41.78 percentage points, compared to before correction.
Article
Hyperspectral Image Classification Based on Fusion of Curvature Filter and Domain Transform Recursive Filter
Remote Sens. 2019, 11(7), 833; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11070833 - 07 Apr 2019
Cited by 3 | Viewed by 1252
Abstract
In recent decades, extracting the spatial information of hyperspectral images by various methods to enhance classification performance has become a research hotspot. This work proposes a new classification method based on the fusion of two kinds of spatial information, classified by a large margin distribution machine (LDM). First, spatial texture information is extracted from the top principal components of the hyperspectral image by a curvature filter (CF). Second, the spatial correlation information of the hyperspectral image is obtained using a domain transform recursive filter (DTRF). Last, the spatial texture and correlation information are fused and classified with the LDM. Experimental results on hyperspectral image classification demonstrate that the proposed curvature filter and domain transform recursive filter with LDM (CFDTRF-LDM) method is superior to other classification methods.
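The preprocessing step of projecting the hyperspectral cube onto its leading principal component before spatial filtering can be sketched as follows. This is a generic PCA illustration under that one assumption, not the CFDTRF-LDM implementation:

```python
import numpy as np

def first_principal_component(cube):
    """Project a hyperspectral cube (H, W, B) onto its first principal
    component, yielding one (H, W) band carrying the largest variance."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    X = X - X.mean(axis=0)                  # center the spectra
    # SVD of the centered data; the first right-singular vector is PC1.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(H, W)        # score image on PC1
```

The resulting single band is what a spatial filter such as the CF is applied to, which keeps the filtering cost independent of the number of spectral bands.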
Article
Local Deep Descriptor for Remote Sensing Image Feature Matching
Remote Sens. 2019, 11(4), 430; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11040430 - 19 Feb 2019
Cited by 8 | Viewed by 1818
Abstract
Feature matching via local descriptors is one of the most fundamental problems in many computer vision tasks, as well as in the remote sensing image processing community. For example, in feature-based remote sensing image registration, feature matching is a vital step that determines the quality of the transformation model, and within the matching process the quality of the feature descriptor directly determines the matching result. At present, the most commonly used descriptors are hand-crafted from the designer's expertise or intuition. However, it is hard to cover all the different cases, especially for remote sensing images with nonlinear grayscale deformation. Recently, deep learning has shown explosive growth and improved the performance of tasks in various fields, especially in the computer vision community. Here, we created remote sensing image training patch samples, named Invar-Dataset, in a novel and automatic way, and then trained a deep convolutional neural network, named DescNet, to generate a robust feature descriptor for feature matching. A dedicated experiment illustrates that our training dataset is more helpful for training a network to generate a good feature descriptor. A qualitative experiment then shows that the feature descriptor vectors learned by DescNet can successfully register remote sensing images with large grayscale differences. A quantitative experiment further illustrates that the feature vectors generated by DescNet acquire more matched points than the hand-crafted Scale Invariant Feature Transform (SIFT) descriptor and other networks; on average, DescNet acquired almost twice as many matched points as the other methods. Finally, we analyze the advantages of Invar-Dataset and DescNet and discuss possible developments in training deep descriptor networks.
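Descriptor matching of the kind compared above, whether the vectors come from SIFT or a learned network like DescNet, is typically done by nearest-neighbor search with Lowe's ratio test. A minimal sketch; the 0.8 ratio is a common default, not a value taken from the paper:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test: keep (i, j)
    only when the best distance is clearly smaller than the second
    best, which suppresses ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The surviving correspondences are then fed to an outlier-rejection step (e.g. RANSAC) before estimating the registration transform.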
Article
Self-Paced Convolutional Neural Network for PolSAR Images Classification
Remote Sens. 2019, 11(4), 424; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11040424 - 19 Feb 2019
Cited by 4 | Viewed by 1769
Abstract
Fully polarimetric synthetic aperture radar (PolSAR) can transmit and receive electromagnetic energy on four polarization channels (HH, HV, VH, VV). The data acquired from the four channels have both similarities and complementarities, and utilizing the information across channels can considerably improve the performance of PolSAR image classification. A convolutional neural network can be used to extract the channel-spatial features of PolSAR images, and self-paced learning has been demonstrated to enhance its learning robustness. In this paper, a novel classification method for PolSAR images using a self-paced convolutional neural network (SPCNN) is proposed. In our method, each pixel is denoted by a 3-dimensional tensor block formed from its scattering intensity values on the four channels, its Pauli RGB values, and its neighborhood information. We then train the SPCNN to extract the channel-spatial features and obtain the classification results. Inspired by self-paced learning, the SPCNN learns easier samples first and gradually involves more difficult samples in the training process; this learning mechanism helps the network converge to better solutions. The proposed method achieved state-of-the-art performance on four real PolSAR datasets.
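The self-paced mechanism described above reduces to a per-sample weighting rule: admit a sample into training only once its loss falls below a growing "age" parameter. A schematic sketch in which the names and growth schedule are illustrative, not the authors' SPCNN code:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced regime: a sample enters training only when its
    current loss is below the age parameter `lam`."""
    return (losses < lam).astype(float)

# Schematic training loop: start with a small lam, then grow it so
# progressively harder samples are admitted.
#   for epoch in range(n_epochs):
#       w = self_paced_weights(per_sample_loss, lam)
#       ... weighted gradient update using w ...
#       lam *= growth_factor
```

Easy samples (low loss) thus dominate early epochs, and hard or noisy samples only influence the network once it has learned the basic patterns.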
Other


Letter
Using ALS Data to Improve Co-Registration of Photogrammetry-Based Point Cloud Data in Urban Areas
Remote Sens. 2020, 12(12), 1943; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12121943 - 16 Jun 2020
Cited by 1 | Viewed by 753
Abstract
Globally, urban areas are rapidly expanding, and high-quality remote sensing products are essential to help guide such development towards efficient and sustainable pathways. Here, we present an algorithm to address a common problem in digital aerial photogrammetry (DAP)-based image point clouds: vertical mis-registration. The algorithm uses the ground inferred from airborne laser scanning (ALS) data as a reference surface and re-aligns individual point clouds to this surface. We demonstrate the effectiveness of the proposed method for the city of Kuopio, in central Finland, using the standard deviation of the vertical coordinate values as a measure of mis-registration. This standard deviation decreased substantially (by more than 1.0 m) over a large proportion (23.2%) of the study area. Moreover, the method performed better in urban and suburban areas than in vegetated areas (parks, forested areas, and so on). Hence, the proposed algorithm is a simple and effective method to improve the quality and usability of DAP-based point clouds in urban areas.
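One simple way to realize the re-alignment idea described above, assuming each DAP ground point already has a corresponding ALS reference height, is a robust constant vertical shift. This is an illustrative sketch, not the published algorithm:

```python
import numpy as np

def vertical_align(dap_z, als_ground_z):
    """Shift a DAP point cloud vertically so its ground heights agree
    with the ALS reference surface. The median offset is used because
    it is robust to points that are not actually on the ground."""
    offset = float(np.median(dap_z - als_ground_z))
    return dap_z - offset, offset
```

In practice the correction would be estimated per tile or per image block, since the vertical bias of DAP point clouds typically varies across the scene.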