
Remote Sens., Volume 12, Issue 23 (December-1 2020) – 161 articles

Cover Story (view full-size image): The regional transport characteristics of fine particulate matter in the east of the North China Plain, together with the contributions of local emissions and external transport, were investigated using multisource data. The horizontal transport route of the pollutant was reconstructed from Himawari-8 Aerosol Optical Depth data. A case study of 22 September 2019 showed that the pollutant was transported mainly from Tangshan to Dezhou, at a speed greater than the near-surface wind speed. Vertical diffusion occurred mainly at low altitudes, below 1.8 km, and the polluted air mass showed a 2–3 hour diffusion delay relative to the ground monitoring data. In addition, with the help of the WRF-Chem model, pollution in the northeast was attributed mainly to local emissions, while the southwestern area was affected mainly by external transport. View this paper
Article
Data Fusion Using a Multi-Sensor Sparse-Based Clustering Algorithm
Remote Sens. 2020, 12(23), 4007; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234007 - 07 Dec 2020
Cited by 1 | Viewed by 1385
Abstract
The increasing amount of information acquired by imaging sensors in Earth Sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring the Earth’s surface. Many studies have investigated the use of multi-sensor data sets to improve the performance of supervised learning-based approaches on various tasks (e.g., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high-spatial-resolution data to derive the spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. To fuse the multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed. More specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. To evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets consisting of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, which includes hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm can provide an accurate clustering map compared to state-of-the-art sparse subspace-based clustering algorithms. Full article
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
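The lasso-based self-representation at the heart of sparse subspace clustering can be sketched in a few lines. The snippet below is an illustrative, much-simplified single-sensor version, not the authors’ Multi-SSC: each sample is regressed sparsely on all the others, and the coefficient magnitudes form an affinity matrix for spectral clustering. All function names and parameter values here are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_affinity(X, alpha=0.01):
    """Lasso-based self-representation: express each sample (column of X)
    sparsely in terms of all other samples, then symmetrize |C| into an
    affinity matrix. This is the core idea of sparse subspace clustering."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    return np.abs(C) + np.abs(C).T

def ssc_cluster(X, n_clusters, alpha=0.01, seed=0):
    """Spectral clustering on the sparse self-representation affinity."""
    W = ssc_affinity(X, alpha)
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=seed)
    return sc.fit_predict(W)
```

Because samples drawn from one subspace cannot sparsely represent samples from an independent subspace, the affinity is (near-)block-diagonal and the spectral step recovers the subspaces.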

Article
Random Sample Fitting Method to Determine the Planetary Boundary Layer Height Using Satellite-Based Lidar Backscatter Profiles
Remote Sens. 2020, 12(23), 4006; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234006 - 07 Dec 2020
Cited by 1 | Viewed by 746
Abstract
The planetary boundary layer is the atmospheric region closest to the earth’s surface, and its height (PBLH) has important implications for weather forecasting, air quality, and climate research. However, lidar-based methods traditionally used to determine the PBLH—such as the ideal profile fitting method (IPF), the maximum gradient method, and the wavelet covariance transform—are not only heavily influenced by cloud layers but also highly sensitive to a low signal-to-noise ratio (SNR). Therefore, a random sample fitting (RANSAF) method is proposed for PBLH detection, combining the random sample consensus and IPF methods. Tests of simulated and satellite-based signals against radiosonde measurements show that the proposed RANSAF method reduces the effects of cloud layers and strongly fluctuating noise on lidar-based PBLH detection better than the traditional algorithms. The low PBLH bias of the RANSAF method indicates that the improved algorithm performs better when the SNR is low or a cloud layer exists, conditions under which the traditional methods are mostly ineffective. The RANSAF method has the potential to determine regional PBLH from satellite-based lidar backscatter profiles. Full article
(This article belongs to the Special Issue Remote Sensing of the Atmospheric Boundary Layer)
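The combination of random sample consensus with ideal profile fitting can be illustrated on a synthetic backscatter profile. In this sketch (our own simplification, not the authors’ RANSAF implementation), the idealized profile B(z) = a − b·erf((z − zp)/s) is fitted to random subsets of the profile, the fit with the most inliers wins, and zp of the consensus refit is taken as the PBLH. All parameter values are illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def ideal_profile(z, a, b, zp, s):
    # Idealized boundary-layer backscatter profile; zp is the PBLH.
    return a - b * erf((z - zp) / s)

def ransac_pblh(z, bsc, n_iter=60, n_sample=15, tol=0.1, seed=1):
    """Fit the ideal profile to random subsets, score each fit by its
    inlier count over the whole profile, then refit on the best
    consensus set. Outliers (e.g., cloud returns) are thereby ignored."""
    rng = np.random.default_rng(seed)
    p0 = (bsc.mean(), (bsc.max() - bsc.min()) / 2, np.median(z), 0.3)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(z.size, n_sample, replace=False)
        try:
            p, _ = curve_fit(ideal_profile, z[idx], bsc[idx], p0=p0, maxfev=2000)
        except RuntimeError:          # a bad subset may fail to converge
            continue
        inliers = np.abs(bsc - ideal_profile(z, *p)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    p, _ = curve_fit(ideal_profile, z[best_inliers], bsc[best_inliers], p0=p0)
    return p[2]                       # zp of the consensus fit
```

On a clean profile contaminated by a cloud-like spike, the consensus fit recovers the transition height while a plain least-squares fit would be pulled toward the spike.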

Article
Comparing Forest Structural Attributes Derived from UAV-Based Point Clouds with Conventional Forest Inventories in the Dry Chaco
Remote Sens. 2020, 12(23), 4005; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234005 - 07 Dec 2020
Viewed by 959
Abstract
Anthropogenic activity leading to forest structural and functional changes calls for specific ecological indicators and monitoring techniques. For decades, forest structure, composition, biomass, and functioning have been studied with ground-based forest inventories. Nowadays, satellites survey the earth, producing imagery at different spatial and temporal resolutions. However, measuring the ecological state of large extensions of forest is still challenging. To reconstruct the three-dimensional forest structure, the structure from motion (SfM) algorithm was applied to imagery taken by an unmanned aerial vehicle (UAV). Structural indicators from UAV-SfM products were then compared to forest inventory indicators of 64 circular plots of 1000 m2 in a subtropical dry forest. Our data indicate that the UAV-SfM indicators provide a valuable alternative to ground-based forest inventory indicators of the upper canopy structure. Based on the correlation between ground-based measures and UAV-SfM derived indicators, we can state that the UAV-SfM technique provides reliable estimates of the mean and maximum height of the upper canopy. The performance of UAV-SfM techniques in characterizing the undergrowth forest structure is low, as UAV-SfM indicators derived from the point cloud in the lower forest strata are not suited to provide correct estimates of the vegetation density there. Besides structural information, UAV-SfM derived indicators, such as canopy cover, can provide relevant ecological information, as these indicators are related to structural, functional, and/or compositional aspects, such as biomass or compositional dominance. Although UAV-SfM techniques cannot replace the wealth of data collected during ground-based forest inventories, their strength lies in the three-dimensional (3D) monitoring of the tree canopy at cm-scale resolution and in the versatility of the technique to provide multi-temporal datasets of the horizontal and vertical forest structure.
Full article
(This article belongs to the Section Forest Remote Sensing)

Article
Monitoring Annual Changes of Lake Water Levels and Volumes over 1984–2018 Using Landsat Imagery and ICESat-2 Data
Remote Sens. 2020, 12(23), 4004; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234004 - 07 Dec 2020
Cited by 2 | Viewed by 912
Abstract
With new Ice, Cloud, and land Elevation Satellite (ICESat)-2 lidar (light detection and ranging) datasets and classical Landsat imagery, a method was proposed to monitor annual changes of lake water levels and volumes over 35 years, dating back to the 1980s. Based on the proposed method, the annual water levels and volumes of Lake Mead in the USA over 1984–2018 were obtained using only two years of ICESat-2 altimetry measurements and all available Landsat observations from 1984 to 2018. During the study period, the estimated annual water levels of Lake Mead agreed well with the in situ measurements, i.e., the R2 and RMSE (root-mean-square error) were 1.00 and 1.06 m, respectively, and the change rates of lake water levels calculated by our method and from the in situ data were −1.36 m/year and −1.29 m/year, respectively. The annual water volumes of Lake Mead also agreed well with in situ measurements, i.e., the R2 and RMSE were 1.00 and 0.36 km3, respectively, and the change rates of lake water volumes calculated by our method and from the in situ data were −0.57 km3/year and −0.58 km3/year, respectively. We found that ICESat-2 exhibits great potential to accurately characterize the Earth’s surface topography and can capture signal photons reflected from underwater bottoms down to approximately 10 m in Lake Mead. Using the globally available ICESat-2 datasets and our method, accurately monitoring changes of annual water levels/volumes of lakes—those with good water quality that have experienced significant water level changes—is no longer limited by the time span of the available satellite altimetry datasets and is potentially achievable over a long-term period. Full article
(This article belongs to the Special Issue Environmental Mapping Using Remote Sensing)
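Once annual water levels (from altimetry) and surface areas (from imagery) are paired, the volume change between two levels is commonly approximated with the trapezoidal formula ΔV ≈ ½(A1 + A2)(h2 − h1). A minimal sketch of this step (our illustration; the paper’s exact estimator may differ):

```python
def volume_change(levels_m, areas_km2):
    """Cumulative water-volume change (km^3) from paired lake levels (m)
    and surface areas (km^2), by trapezoidal integration of the
    level-area curve: dV ~= 0.5 * (A_i + A_{i+1}) * (h_{i+1} - h_i).
    Levels are converted from m to km so volumes come out in km^3."""
    v = [0.0]
    for i in range(1, len(levels_m)):
        dh_km = (levels_m[i] - levels_m[i - 1]) / 1000.0
        v.append(v[-1] + 0.5 * (areas_km2[i] + areas_km2[i - 1]) * dh_km)
    return v
```

For a linearly varying level-area relation the trapezoidal rule is exact, which makes it a natural choice when only a handful of annual (level, area) pairs are available.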

Article
Multi-Label Remote Sensing Image Scene Classification by Combining a Convolutional Neural Network and a Graph Neural Network
Remote Sens. 2020, 12(23), 4003; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234003 - 07 Dec 2020
Cited by 8 | Viewed by 1189
Abstract
As one of the fundamental tasks in remote sensing (RS) image understanding, multi-label remote sensing image scene classification (MLRSSC) is attracting increasing research interest. Human beings can easily perform MLRSSC by examining the visual elements contained in the scene and the spatio-topological relationships among them. However, most existing methods only perceive the visual elements and disregard their spatio-topological relationships. With this consideration, this paper proposes a novel deep learning-based MLRSSC framework that combines a convolutional neural network (CNN) and a graph neural network (GNN), termed MLRSSC-CNN-GNN. Specifically, the CNN is employed to learn to perceive the visual elements in the scene and to generate high-level appearance features. Based on the trained CNN, a scene graph is then constructed for each scene, whose nodes are represented by superpixel regions of the scene. To fully mine the spatio-topological relationships of the scene graph, a multi-layer-integration graph attention network (GAT) model is proposed to address MLRSSC, where the GAT is one of the latest developments in GNNs. Extensive experiments on two public MLRSSC datasets show that the proposed MLRSSC-CNN-GNN obtains superior performance compared with state-of-the-art methods. Full article

Article
Semiautomated Mapping of Benthic Habitats and Seagrass Species Using a Convolutional Neural Network Framework in Shallow Water Environments
Remote Sens. 2020, 12(23), 4002; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234002 - 07 Dec 2020
Cited by 1 | Viewed by 626
Abstract
Benthic habitats are structurally complex and ecologically diverse ecosystems that are severely vulnerable to human stressors. Consequently, marine habitats must be mapped and monitored to provide the information necessary to understand ecological processes and guide management actions. In this study, we propose a semiautomated framework for the detection and mapping of benthic habitats and seagrass species using convolutional neural networks (CNNs). Benthic habitat field data from a geo-located towed camera and high-resolution satellite images were integrated to evaluate the proposed framework. Features extracted from pre-trained CNNs and a “bagging of features” (BOF) algorithm were used for benthic habitat and seagrass species detection. Furthermore, the correctly detected images were used as ground truth samples for training and validating CNNs with simple architectures. These CNNs were evaluated for their accuracy in benthic habitat and seagrass species mapping using high-resolution satellite images. Two study areas, Shiraho and Fukido (located on Ishigaki Island, Japan), were used to evaluate the proposed model: seven benthic habitats were classified in the Shiraho area and four seagrass species were mapped in Fukido cove. Analysis showed that the overall accuracy of benthic habitat detection in Shiraho and seagrass species detection in Fukido was 91.5% (7 classes) and 90.4% (4 species), respectively, while the overall accuracy of benthic habitat and seagrass mapping in Shiraho and Fukido was 89.9% and 91.2%, respectively. Full article
(This article belongs to the Section Ocean Remote Sensing)

Article
Change Detection within Remotely Sensed Satellite Image Time Series via Spectral Analysis
Remote Sens. 2020, 12(23), 4001; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234001 - 07 Dec 2020
Cited by 12 | Viewed by 1100
Abstract
Jump or break detection within a non-stationary time series is a crucial and challenging problem in a broad range of applications, including environmental monitoring. Remotely sensed time series are not only non-stationary and unequally spaced (irregularly sampled) but also noisy due to atmospheric effects, such as clouds, haze, and smoke. To address this challenge, a robust jump detection method is proposed based on the Anti-Leakage Least-Squares Spectral Analysis (ALLSSA) along with an appropriate temporal segmentation. This method, named Jumps Upon Spectrum and Trend (JUST), simultaneously searches for trends and statistically significant spectral components of each time series segment to identify potential jumps, considering appropriate weights associated with the time series. JUST is successfully applied to simulated vegetation time series with varying jump location and magnitude, number of observations, seasonal component, and noise. Using a collection of simulated and real-world vegetation time series in southeastern Australia, it is shown that JUST performs better than Breaks For Additive Seasonal and Trend (BFAST) in identifying jumps within the trend component of time series of various types. Furthermore, JUST is applied to Landsat 8 composites for a forested region in California, U.S., to show its potential in characterizing spatial and temporal changes in a forested landscape. Therefore, JUST is recommended as a robust alternative change detection method which can consider observational uncertainties and does not require any interpolation or gap filling. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
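The core of least-squares jump detection can be illustrated with a toy model: fit a linear trend plus one harmonic plus a candidate step at each admissible time, and keep the step location that minimizes the residual sum of squares. This is a heavily simplified stand-in for JUST (no ALLSSA, no weights, a single jump, regular sampling); all names and values below are our own.

```python
import numpy as np

def detect_jump(t, y):
    """Grid-search a single step change: at each candidate time tau, fit
    y = a + b*t + c*cos(2*pi*t) + d*sin(2*pi*t) + g*1[t >= tau] by
    ordinary least squares and keep the tau with the lowest residual
    sum of squares. Returns (jump time, jump magnitude g)."""
    best = (np.inf, None, None)
    for tau in t[5:-5]:                              # skip the edges
        X = np.column_stack([np.ones_like(t), t,
                             np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                             (t >= tau).astype(float)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ coef) ** 2)
        if rss < best[0]:
            best = (rss, tau, coef[-1])
    return best[1], best[2]
```

Because the seasonal harmonic is estimated jointly with the step, a seasonal dip is not mistaken for a jump, which is the essential point of trend-plus-spectrum methods of this kind.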

Article
Crop Yield Prediction Using Multitemporal UAV Data and Spatio-Temporal Deep Learning Models
Remote Sens. 2020, 12(23), 4000; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12234000 - 07 Dec 2020
Cited by 5 | Viewed by 1297
Abstract
Unmanned aerial vehicle (UAV) based remote sensing is gaining momentum worldwide in a variety of agricultural and environmental monitoring and modelling applications. At the same time, the increasing availability of yield monitoring devices in harvesters enables input-target mapping of in-season RGB and crop yield data at a resolution otherwise unattainable by openly available satellite sensor systems. Using time series UAV RGB and weather data collected from nine crop fields in Pori, Finland, we evaluated the feasibility of spatio-temporal deep learning architectures for crop yield modelling and prediction with RGB time series data. Using Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks as spatial and temporal base architectures, we developed and trained CNN-LSTM, convolutional LSTM and 3D-CNN architectures with full 15-week image frame sequences from the whole growing season of 2018. The best performing architecture, the 3D-CNN, was then evaluated with several shorter frame sequence configurations from the beginning of the season. With the 3D-CNN, we achieved a mean absolute error (MAE) of 218.9 kg/ha and a mean absolute percentage error (MAPE) of 5.51% with full-length sequences. The best shorter-sequence performance with the same model was an MAE of 292.8 kg/ha and a MAPE of 7.17% with four weekly frames from the beginning of the season. Full article
(This article belongs to the Special Issue Deep Learning Methods for Crop Monitoring and Crop Yield Prediction)
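The two reported scores follow the standard definitions of mean absolute error and mean absolute percentage error, which can be computed as:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the target (here kg/ha)."""
    return float(np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes all true values are nonzero."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```

For yield-scale numbers, e.g. true yields of 4000 and 5000 kg/ha predicted as 3900 and 5200 kg/ha, MAE is 150 kg/ha and MAPE is 3.25%.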

Article
Assessment of Spatio-Temporal Landscape Changes from VHR Images in Three Different Permafrost Areas in the Western Russian Arctic
Remote Sens. 2020, 12(23), 3999; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233999 - 07 Dec 2020
Cited by 4 | Viewed by 808
Abstract
Our study highlights the usefulness of very high resolution (VHR) images for detecting various types of disturbances over permafrost areas, using three example regions in different permafrost zones. The study focuses on detecting subtle changes in land cover classes, thermokarst water bodies, river dynamics, retrogressive thaw slumps (RTS) and infrastructure in the Yamal Peninsula, Urengoy and Pechora regions. Very high-resolution (sub-meter) optical imagery from WorldView, QuickBird and GeoEye, in conjunction with declassified Corona images, was used in the analyses. The comparison of very high-resolution images acquired in 2003/2004 and 2016/2017 indicates a pronounced increase in the extent of tundra and a slight increase of land covered by water. The number of water bodies increased in all three regions, especially in discontinuous permafrost, where 14.86% of the lakes and ponds were newly initiated between 2003 and 2017. The analysis of the evolution of two river channels in Yamal and Urengoy indicates the dominance of erosion during the last two decades. An increase in both rivers’ lengths and a significant widening of the river channels were also observed. The number and total surface of RTS in the Yamal Peninsula increased strongly between 2004 and 2016, with a mean annual headwall retreat rate of 1.86 m/year. Extensive networks of infrastructure appeared in the Yamal Peninsula in the last two decades, stimulating the initiation of new thermokarst features. The significant warming and seasonal variations of the hydrologic cycle, in particular the increased snow water equivalent, favored a deepening of the active layer and thus an increasing number of thermokarst lake formations. Full article
(This article belongs to the Special Issue Environmental Mapping Using Remote Sensing)

Article
Characteristics and Seasonal Variations of Cirrus Clouds from Polarization Lidar Observations at a 30°N Plain Site
Remote Sens. 2020, 12(23), 3998; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233998 - 06 Dec 2020
Cited by 2 | Viewed by 625
Abstract
Geometrical and optical characteristics of cirrus clouds were studied based on one year of polarization lidar measurements (3969 h on 228 different days between March 2019 and February 2020) at Wuhan (30.5°N, 114.4°E), China. The cirrus clouds showed an overall occurrence frequency of ~48%, with mid-cloud altitudes of ~8–16 km over the 30°N plain site. The mean values of their mid-cloud height and temperature were 11.5 ± 2.0 km and −46.5 ± 10.7 °C, respectively. The cirrus geometrical thickness tended to decrease with decreasing mid-cloud temperature, with a mean value of 2.5 ± 1.1 km. With decreasing mid-cloud temperature, the cirrus optical depth (COD) tended to decrease, while the depolarization ratio tended to increase. On average, the COD, lidar ratio, and particle depolarization ratio were 0.30 ± 0.36, 21.6 ± 7.5 sr, and 0.30 ± 0.09, respectively, after multiple scattering correction. Of all observed cirrus events, sub-visual, thin, and dense cirrus clouds accounted for 18%, 51%, and 31%, respectively. The cirrus clouds showed seasonal variations, with cloud altitude maximizing in a slightly shifted summertime (July to September), when the southwesterly wind prevailed, and minimizing in the winter months. Seasonally averaged lidar ratio and depolarization ratio showed maximum values in spring and summer, respectively. Furthermore, a positive correlation between cirrus occurrence frequency and dust column mass density was found in all seasons except summer, suggesting heterogeneous ice formation therein. The cirrus cloud characteristics over the lidar site were compared with those observed at low and mid latitudes. Full article
(This article belongs to the Section Atmosphere Remote Sensing)

Article
Comprehensive Comparisons of State-of-the-Art Gridded Precipitation Estimates for Hydrological Applications over Southern China
Remote Sens. 2020, 12(23), 3997; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233997 - 06 Dec 2020
Cited by 3 | Viewed by 748
Abstract
Satellite-based precipitation estimates with high quality and spatial-temporal resolution play a vital role in forcing global or regional meteorological, hydrological, and agricultural models, and are especially useful over large, poorly gauged regions. In this study, we apply various statistical indicators to comprehensively analyze the quality and compare the performance of five newly released satellite and reanalysis precipitation products against China Merged Precipitation Analysis (CMPA) rain gauge data, at 0.1° × 0.1° spatial resolution and two temporal scales (daily and hourly), over southern China from June to August 2019. These include the Precipitation Estimates from Remotely Sensed Information using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS), the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5-Land), Fengyun-4 (FY-4A), the Global Satellite Mapping of Precipitation (GSMaP), and the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG). Results indicate that: (1) all five products overestimate the accumulated rainfall in the summer, with FY-4A being the most severe; additionally, FY-4A cannot capture the spatial and temporal distribution characteristics of precipitation over southern China. (2) IMERG and GSMaP perform better than the other three datasets at both daily and hourly scales; IMERG correlates slightly better with the CMPA data than GSMaP, while it performs worse than GSMaP in terms of probability of detection (POD). (3) ERA5-Land performs better than PERSIANN-CCS and FY-4A at the daily scale but shows the worst correlation coefficient (CC), false alarm ratio (FAR), and equitable threat score (ETS) of all precipitation products at the hourly scale. (4) The rankings of overall performance on precipitation estimation for this region are IMERG, GSMaP, ERA5-Land, PERSIANN-CCS, and FY-4A at the daily scale; and IMERG, GSMaP, PERSIANN-CCS, FY-4A, and ERA5-Land at the hourly scale.
These findings will provide valuable feedback for improving the current satellite-based precipitation retrieval algorithms and also provide preliminary references for flood forecasting and natural disaster early warning. Full article
(This article belongs to the Special Issue Remote Sensing Applications for Water Scarcity Assessment)
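The categorical scores used above (POD, FAR, ETS) all derive from a 2 × 2 rain/no-rain contingency table of hits, false alarms, misses, and correct negatives. A minimal sketch with the standard formulas (the 0.1 mm event threshold is our illustrative choice, not necessarily the paper's):

```python
import numpy as np

def categorical_scores(obs_mm, est_mm, threshold=0.1):
    """POD, FAR and ETS from a 2x2 contingency table of rain/no-rain
    events, where an event is precipitation >= threshold (mm).
    POD = hits/(hits+misses); FAR = false/(hits+false);
    ETS = (hits - hits_random)/(hits + false + misses - hits_random)."""
    o = np.asarray(obs_mm) >= threshold
    e = np.asarray(est_mm) >= threshold
    hits = np.sum(o & e)
    false = np.sum(~o & e)
    miss = np.sum(o & ~e)
    n = o.size
    pod = hits / (hits + miss)
    far = false / (hits + false)
    hits_random = (hits + false) * (hits + miss) / n   # chance-level hits
    ets = (hits - hits_random) / (hits + false + miss - hits_random)
    return pod, far, ets
```

ETS discounts the hits that a random forecast with the same event frequency would score, which is why it is preferred over plain hit rate for comparing products with different rain-detection biases.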

Letter
Spatial Scales of Sea Surface Salinity Subfootprint Variability in the SPURS Regions
Remote Sens. 2020, 12(23), 3996; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233996 - 06 Dec 2020
Cited by 1 | Viewed by 557
Abstract
Subfootprint variability (SFV), or representativeness error, is variability within the footprint of a satellite that can impact validation by comparison of in situ and remote sensing data. This study seeks to determine the size of the sea surface salinity (SSS) SFV as a function of footprint size in two regions that were heavily sampled with in situ data. The Salinity Processes in the Upper-ocean Regional Studies-1 (SPURS-1) experiment was conducted in the subtropical North Atlantic in 2012–2013, whereas the SPURS-2 study was conducted in the tropical eastern North Pacific in 2016–2017. SSS SFV was also computed using a high-resolution regional model based on the Regional Ocean Modeling System (ROMS). We computed the SFV at footprint sizes ranging from 20 to 100 km for both regions. The SFV is strongly seasonal, but for different reasons in the two regions. In the SPURS-1 region, meso- and submesoscale variability seemed to control the size of the SFV. In the SPURS-2 region, the SFV is much larger than in SPURS-1 and is controlled by patchy rainfall. Full article
(This article belongs to the Special Issue Moving Forward on Remote Sensing of Sea Surface Salinity)
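Conceptually, the SFV at a given footprint size is just the spread of in situ salinity samples inside the footprint. A boxcar-window sketch (our simplification; published SFV estimates typically use a Gaussian-weighted footprint rather than a hard cutoff):

```python
import numpy as np

def subfootprint_variability(x_km, y_km, sss, x0, y0, footprint_km):
    """SFV at one footprint location: the standard deviation of the
    salinity samples falling within half the footprint size of the
    centre (x0, y0). Coordinates are in km on a local tangent plane."""
    r = np.hypot(np.asarray(x_km) - x0, np.asarray(y_km) - y0)
    inside = r <= footprint_km / 2.0
    return float(np.std(np.asarray(sss)[inside]))
```

Evaluating this at footprint sizes of 20–100 km, as in the study, yields an SFV-versus-footprint curve whose growth reflects how much salinity variance lives at scales below each footprint.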

Article
Learning to Track Aircraft in Infrared Imagery
Remote Sens. 2020, 12(23), 3995; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233995 - 06 Dec 2020
Cited by 1 | Viewed by 638
Abstract
Airborne target tracking in infrared imagery remains a challenging task. The airborne target usually has a low signal-to-noise ratio and shows varying visual patterns. The features adopted in visual tracking algorithms are usually deep features pre-trained on ImageNet, which are not tightly coupled with the current video domain and therefore might not be optimal for infrared target tracking. To this end, we propose a new approach to learn domain-specific features, which can be adapted to the current video online without pre-training on a large dataset. Considering that only a few samples from the initial frame can be used for online training, general feature representations are encoded into the network for a better initialization. The feature learning module is flexible and can be integrated into tracking frameworks based on correlation filters to improve the baseline method. Experiments on airborne infrared imagery demonstrate the effectiveness of our tracking algorithm. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning for Remote Sensing Applications)
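The correlation-filter baseline that such a feature learning module plugs into can be sketched as a single-sample MOSSE-style filter: ridge regression in the Fourier domain maps a training patch to a Gaussian response, and the peak of the filtered next frame locates the target. This sketch uses raw pixels rather than the learned domain-specific features of the paper; all parameter values are our own.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    """Single-sample MOSSE-style correlation filter: solve, in the
    frequency domain, for H such that H * F approximates a Gaussian
    response G peaked at the patch centre (ridge-regularized by lam)."""
    h, w = patch.shape
    gy, gx = np.mgrid[0:h, 0:w]
    g = np.exp(-0.5 * ((gy - h // 2) ** 2 + (gx - w // 2) ** 2) / sigma ** 2)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(H, frame):
    """Apply the filter to a new frame; the response peak is the target."""
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(frame)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Because the filter is learned and applied with FFTs, a circular shift of the target produces the same shift of the response peak, which is what makes these trackers fast enough for frame-rate updating.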

Article
Investigating the Susceptibility to Failure of a Rock Cliff by Integrating Structure-from-Motion Analysis and 3D Geomechanical Modelling
Remote Sens. 2020, 12(23), 3994; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233994 - 06 Dec 2020
Cited by 2 | Viewed by 1067
Abstract
Multi-temporal UAV and digital photo surveys were acquired between 2017 and 2020 on a coastal cliff in soft rocks in South-Eastern Italy for hazard assessment, and the corresponding point clouds were processed and compared. The multi-temporal survey results indicate a progressively deepening process of erosion and detachment of blocks from the mid-height portion of the cliff, with the upper, stiffer rock stratum temporarily acting as a shelf against the risk of general collapse. Based on the DEM obtained, a three-dimensional geomechanical finite element model was created and analyzed in order to investigate the general stability of the cliff and to detect the rock portions most susceptible to failure. Concerning the evolving erosion process active in the cliff, the photogrammetric analyses and the model simulations are in agreement, indicating a proneness to both local and general instabilities. Full article
(This article belongs to the Special Issue Latest Developments in 3D Mapping with Unmanned Aerial Vehicles)

Article
Sand Dune Dynamics Exploiting a Fully Automatic Method Using Satellite SAR Data
Remote Sens. 2020, 12(23), 3993; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233993 - 06 Dec 2020
Viewed by 906
Abstract
This work presents an automatic procedure to quantify dune dynamics on isolated barchan dunes exploiting Synthetic Aperture Radar satellite data. We use C-band datasets, allowing the multi-temporal analysis of dune dynamics in two study areas, one located between the Western Sahara and Mauritania and the second located in the South Rayan dune field in Egypt. Our method uses an adaptive parametric thresholding algorithm and common geospatial operations. A quantitative dune dynamics analysis is also performed. We have measured dune migration rates of 2–6 m/year in the NNW-SSE direction and 11–20 m/year in the NNE-SSW direction for the South Rayan and Western Sahara dune fields, respectively. To validate our results, we have manually tracked several dunes per study area using Google Earth imagery. Results from both the automatic and manual approaches are consistent. Finally, we discuss the advantages and limitations of the approach presented. Full article
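The paper's exact adaptive parametric thresholding algorithm is not reproduced here; as an illustrative stand-in, a global Otsu threshold shows the kind of data-driven segmentation involved (`otsu_threshold` and the synthetic backscatter values are assumptions for demonstration):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Data-driven global threshold (Otsu): maximize between-class variance
    over the image histogram, as a stand-in for adaptive dune/background segmentation."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 probability up to each bin
    mu = np.cumsum(p * centers)            # class-0 (unnormalized) mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]

# Synthetic bimodal "backscatter": dark inter-dune area vs. bright dune facets.
image = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
t = otsu_threshold(image)
mask = image > t                           # binary dune mask
```

The resulting mask would then feed the geospatial operations (polygonization, centroid tracking) that quantify migration between acquisitions.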
(This article belongs to the Special Issue SAR Remote Sensing of Arid Regions)

Article
Automatic Extraction of Seismic Landslides in Large Areas with Complex Environments Based on Deep Learning: An Example of the 2018 Iburi Earthquake, Japan
Remote Sens. 2020, 12(23), 3992; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233992 - 06 Dec 2020
Cited by 1 | Viewed by 1004
Abstract
After a major earthquake, the rapid identification and mapping of co-seismic landslides across the whole affected area is of great significance for emergency rescue and loss assessment of seismic hazards. In recent years, researchers have achieved good results on small areas with a single type of environment. However, for a whole earthquake-affected area with a large extent and complex environments, the accuracy of co-seismic landslide extraction remains low, and there is no ideal method to solve this problem. In this paper, Planet satellite images with a spatial resolution of 3 m are used to train a seismic landslide recognition model based on deep learning to carry out rapid and automatic extraction of landslides triggered by the 2018 Iburi earthquake, Japan. The study area is about 671.87 km2, of which 60% is used to train the model and the remaining 40% is used to verify its accuracy. The results show that most of the co-seismic landslides can be identified by this method. In this experiment, the verification precision of the model is 0.7965 and the F1 score is 0.8288. This method can intelligently identify and map landslides triggered by earthquakes from Planet images. It has strong practicability and high accuracy, and can provide assistance for earthquake emergency rescue and rapid disaster assessment. Full article
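For reference, precision and F1 as quoted above follow the usual pixel-count definitions; a minimal sketch (the tp/fp/fn counts below are hypothetical, not the paper's):

```python
def precision_recall_f1(tp, fp, fn):
    """Detection scores from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical pixel counts for illustration.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)
```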

Article
Adaptive Iterated Shrinkage Thresholding-Based Lp-Norm Sparse Representation for Hyperspectral Imagery Target Detection
Remote Sens. 2020, 12(23), 3991; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233991 - 06 Dec 2020
Cited by 3 | Viewed by 558
Abstract
In recent years, with the development of compressed sensing theory, sparse representation methods have attracted the attention of many researchers. Sparse representation can approximate the original image information with less storage space. Sparse representation has been investigated for hyperspectral imagery (HSI) detection, where an approximation of the test pixel can be obtained by solving an l1-norm minimization. However, l1-norm minimization does not always yield a sufficiently sparse solution when a dictionary is not large enough or its atoms present a certain level of coherence. Comparatively, non-convex minimization problems, such as the lp penalties, need much weaker incoherence constraint conditions and may achieve a more accurate approximation. Hence, we propose a novel detection algorithm utilizing sparse representation with the lp-norm and propose an adaptive iterated shrinkage thresholding method (AISTM) for lp-norm non-convex sparse coding. Target detection is implemented by representing all pixels with a homogeneous target dictionary (HTD), and the output is generated according to the representation residual. Experimental results for four real hyperspectral datasets show that the detection performance of the proposed method is improved by about 10% to 30% over methods mentioned in the paper, such as the matched filter (MF), sparse and low-rank matrix decomposition (SLMD), adaptive cosine estimation (ACE), constrained energy minimization (CEM), one-class support vector machine (OC-SVM), the original sparse representation detector with the l1-norm, and combined sparse and collaborative representation (CSCR). Full article
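The flavor of lp-norm sparse coding by iterated shrinkage thresholding can be sketched as follows; this is not the AISTM update itself but a common reweighted soft-threshold approximation of the lp proximal step, and `ista_lp` with its defaults is an assumption:

```python
import numpy as np

def ista_lp(D, y, lam=0.01, p=0.7, n_iter=100, eps=1e-8):
    """Sketch: minimize 0.5*||y - D@x||^2 + lam*||x||_p^p by iterated shrinkage
    thresholding, approximating the lp proximal step with reweighted soft thresholds."""
    L = np.linalg.norm(D, 2) ** 2                        # Lipschitz constant of data term
    x = D.T @ y                                          # correlation initialization
    for _ in range(n_iter):
        z = x - (D.T @ (D @ x - y)) / L                  # gradient step on data term
        w = lam * p * (np.abs(x) + eps) ** (p - 1) / L   # lp-dependent per-entry threshold
        x = np.sign(z) * np.maximum(np.abs(z) - w, 0.0)  # soft shrinkage
    return x

# Toy demo: exact sparse recovery with an orthonormal dictionary.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))
x_true = np.zeros(10); x_true[2] = 1.0; x_true[7] = -1.0
x_hat = ista_lp(Q, Q @ x_true)
```

For p < 1 the threshold grows for small coefficients, which is what yields the sparser solutions the abstract contrasts against l1 minimization.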
(This article belongs to the Section Remote Sensing Image Processing)

Article
Application of Lithological Mapping Based on Advanced Hyperspectral Imager (AHSI) Imagery Onboard Gaofen-5 (GF-5) Satellite
Remote Sens. 2020, 12(23), 3990; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233990 - 06 Dec 2020
Cited by 1 | Viewed by 952
Abstract
The Advanced Hyperspectral Imager (AHSI), carried by the Gaofen-5 (GF-5) satellite, is the first hyperspectral sensor that simultaneously offers broad coverage and a broad spectrum. Meanwhile, deep-learning-based approaches are emerging to manage the growing volume of data produced by satellites. However, the application potential of GF-5 AHSI imagery in lithological mapping using deep-learning-based methods is currently unknown. This paper assessed GF-5 AHSI imagery for lithological mapping in comparison with Shortwave Infrared Airborne Spectrographic Imager (SASI) data. A multi-scale 3D deep convolutional neural network (M3D-DCNN), a hybrid spectral CNN (HybridSN), and a spectral–spatial unified network (SSUN) were selected to verify the applicability and stability of deep-learning-based methods through comparison with a support vector machine (SVM) on six datasets constructed from GF-5 AHSI, Sentinel-2A, and SASI imagery. The results show that all methods produce classification results with accuracy greater than 90% on all datasets, and that M3D-DCNN is both more accurate and more stable. It produces especially encouraging results using just the short-wave infrared wavelength subset (SWIR bands) of the GF-5 AHSI data. Accordingly, GF-5 AHSI imagery can provide impressive results, and its SWIR bands have a high signal-to-noise ratio (SNR), which meets the requirements of large-scale and large-area lithological mapping. The M3D-DCNN method is therefore recommended for lithological mapping based on GF-5 AHSI hyperspectral data. Full article

Article
High Resolution Digital Terrain Models of Mercury
Remote Sens. 2020, 12(23), 3989; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233989 - 06 Dec 2020
Cited by 2 | Viewed by 1040
Abstract
We refined our Shape from Shading (SfS) algorithm, which has previously been used to create digital terrain models (DTMs) of the Lunar and Martian surfaces, to generate high-resolution DTMs of Mercury from MESSENGER imagery. To adapt the reconstruction procedure to the specific conditions of Mercury and the available imagery, we introduced two methodological innovations. First, we extended the SfS algorithm to enable 3D reconstruction from image mosaics. Because most mosaic tiles were acquired at different times and under various illumination conditions, the brightness of adjacent tiles may vary. Brightness variations that are not fully captured by the reflectance model may yield discontinuities at tile borders. We found that relaxing the constraint of a continuous albedo map improves the topographic results over an extensive region by removing discontinuities at tile borders. The second innovation enables the generation of accurate DTMs from images with substantial albedo variations, such as hollows. We employed an iterative procedure that initializes the SfS algorithm with the albedo map obtained in the previous iteration step. This approach converges and yields a reasonable albedo map and topography. With these approaches, we generated DTMs of several science targets such as the Rachmaninoff basin, Praxiteles crater, fault lines, and several hollows. To evaluate the results, we compared our DTMs with stereo DTMs and laser altimeter data. In contrast to coarse laser altimetry tracks and stereo algorithms, which tend to be affected by interpolation artifacts, SfS can generate DTMs at almost the image resolution. The root mean squared errors (RMSE) at our target sites are below the size of the horizontal image resolution. For some targets, we achieved an effective resolution of less than 10 m/pixel, the best resolution for Mercury to date. We critically discuss the limitations of the evaluation methodology. Full article
(This article belongs to the Special Issue Planetary 3D Mapping, Remote Sensing and Machine Learning)

Article
An Improved Correction Method of Nighttime Light Data Based on EVI and WorldPop Data
Remote Sens. 2020, 12(23), 3988; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233988 - 06 Dec 2020
Cited by 2 | Viewed by 702
Abstract
Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) data suffers from discontinuity and a pixel saturation effect. It is also incompatible with the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP/VIIRS) data. In view of those shortcomings, this research puts forward the WorldPop- and enhanced vegetation index (EVI)-adjusted nighttime light (WEANTL), using EVI and WorldPop data to achieve intercalibration and saturation correction of DMSP/OLS data. A long time series of nighttime light images of China from 2001 to 2018 was constructed by fitting the DMSP/OLS data and NPP/VIIRS data. The corrected nighttime light images were examined to assess the estimation ability for gross domestic product (GDP) and electric power consumption (EPC) on national and provincial scales, respectively. The results indicated that (1) after correction, the nighttime light (NTL) data preserve the growth trend on national and regional scales, and the interannual volatility of the corrected NTL data is lower than that of the uncorrected NTL data; (2) on the national scale, for the model established between NTL data and GDP data (NTL-GDP), the determination coefficient (R2) and the mean absolute relative error (MARE) are 0.981 and 8.518%, while the R2 and MARE of the model between NTL data and EPC data (NTL-EPC) were 0.990 and 4.655%; (3) on the provincial scale, the R2 and MARE of the NTL-GDP model over provincial units are 0.7386 and 38.599%, while those of the NTL-EPC model are 0.8927 and 29.319%; (4) on the provincial scale, the R2 and MARE of the NTL-GDP model on time series are 0.9667 and 10.877%, and those of the NTL-EPC model on time series are 0.9720 and 6.435%; the established TNL-GDP and TNL-EPC models with data from 30 provinces all passed the F-test at the 0.001 level; and (5) the prediction accuracy of GDP and EPC on time series was nearly 100%.
Therefore, the correction method provided in this research can be applied to estimating GDP and EPC reliably and accurately on multiple scales. Full article
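The goodness-of-fit statistics used throughout the abstract (R2 and MARE) can be computed as below; the linear fit and the NTL/GDP numbers here are purely illustrative, not the paper's:

```python
import numpy as np

def r2_mare(y_true, y_pred):
    """Determination coefficient R2 and mean absolute relative error (MARE, in %)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mare = 100.0 * np.mean(np.abs(y_true - y_pred) / y_true)
    return r2, mare

# Illustrative regression of GDP on total nighttime light (hypothetical values).
tnl = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
gdp = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
a, b = np.polyfit(tnl, gdp, 1)          # least-squares linear fit
r2, mare = r2_mare(gdp, a * tnl + b)
```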
(This article belongs to the Special Issue Remote Sensing of Nighttime Observations)

Article
Water Quality Retrieval from PRISMA Hyperspectral Images: First Experience in a Turbid Lake and Comparison with Sentinel-2
Remote Sens. 2020, 12(23), 3984; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233984 - 06 Dec 2020
Cited by 6 | Viewed by 1965
Abstract
A new era of spaceborne hyperspectral imaging has just begun with the recent availability of data from PRISMA (PRecursore IperSpettrale della Missione Applicativa), launched by the Italian space agency (ASI). There has been pre-launch optimism that the wealth of spectral information offered by PRISMA can contribute to a variety of aquatic science and management applications. Here, we examine the potential of PRISMA level 2D images in retrieving standard water quality parameters, including total suspended matter (TSM), chlorophyll-a (Chl-a), and colored dissolved organic matter (CDOM), in a turbid lake (Lake Trasimeno, Italy). We perform consistency analyses among the aquatic products (remote sensing reflectance (Rrs) and constituents) derived from PRISMA and those from Sentinel-2. The consistency analyses are expanded to synthesized Sentinel-2 data as well. By spectrally downsampling the PRISMA images, we better isolate the impact of spectral resolution on retrieving the constituents. The retrieval of constituents from both PRISMA and Sentinel-2 images is built upon inverting the radiative transfer model implemented in the Water Color Simulator (WASI) processor. The inversion involves a parameter (gdd) to compensate for atmospheric and sun-glint artifacts. Strong agreement is found in the cross-sensor comparison of Rrs products at different wavelengths (average R ≈ 0.87). However, the Rrs of PRISMA at shorter wavelengths (<500 nm) is slightly overestimated with respect to Sentinel-2. This is in line with the estimates of gdd through the inversion, which suggest an underestimated atmospheric path radiance in PRISMA level 2D products compared to the atmospherically corrected Sentinel-2 data. The results indicate the high potential of PRISMA level 2D imagery in mapping water quality parameters in Lake Trasimeno. The PRISMA-based retrievals agree well with those of Sentinel-2, particularly for TSM. Full article
(This article belongs to the Special Issue Remote Sensing of Lake Properties and Dynamics)

Article
Synergistic Use of Hyperspectral UV-Visible OMI and Broadband Meteorological Imager MODIS Data for a Merged Aerosol Product
Remote Sens. 2020, 12(23), 3987; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233987 - 05 Dec 2020
Cited by 2 | Viewed by 1049
Abstract
The retrieval of optimal aerosol datasets by the synergistic use of hyperspectral ultraviolet (UV)–visible and broadband meteorological imager (MI) techniques was investigated. The Aura Ozone Monitoring Instrument (OMI) Level 1B (L1B) was used as a proxy for hyperspectral UV–visible instrument data to which the Geostationary Environment Monitoring Spectrometer (GEMS) aerosol algorithm was applied. Moderate-Resolution Imaging Spectroradiometer (MODIS) L1B and dark target aerosol Level 2 (L2) data were used with a broadband MI to take advantage of the consistent time gap between the MODIS and the OMI. First, the use of cloud mask information from the MI infrared (IR) channel was tested for synergy. High-spatial-resolution and IR channels of the MI helped mask cirrus and sub-pixel cloud contamination of GEMS aerosol, as clearly seen in aerosol optical depth (AOD) validation with Aerosol Robotic Network (AERONET) data. Second, dust aerosols were distinguished in the GEMS aerosol-type classification algorithm by calculating the total dust confidence index (TDCI) from MODIS L1B IR channels. Statistical analysis indicates that the Probability of Correct Detection (POCD) between the forward and inversion aerosol dust models (DS) was increased from 72% to 94% by use of the TDCI for GEMS aerosol-type classification, and updated aerosol types were then applied to the GEMS algorithm. Use of the TDCI for DS type classification in the GEMS retrieval procedure gave improved single-scattering albedo (SSA) values for absorbing fine pollution particles (BC) and DS aerosols. Aerosol layer height (ALH) retrieved from GEMS was compared with Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data, which provides high-resolution vertical aerosol profile information. The CALIOP ALH was calculated from total attenuated backscatter data at 1064 nm, which is identical to the definition of GEMS ALH. Application of the TDCI value reduced the median bias of GEMS ALH data slightly. 
The GEMS ALH bias approaches zero, especially for GEMS AOD values of >~0.4 and GEMS SSA values of <~0.95. Finally, the AOD products from the GEMS algorithm and MI were used in aerosol merging with the maximum-likelihood estimation method, based on a weighting factor derived from the standard deviation of the original AOD products. With the advantage of the UV–visible channel in retrieving aerosol properties over bright surfaces, the combined AOD products demonstrated better spatial data availability than the original AOD products, with comparable accuracy. Furthermore, pixel-level error analysis of GEMS AOD data indicates improvement through MI synergy. Full article
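Per pixel, maximum-likelihood merging with weights derived from the product standard deviations reduces to inverse-variance weighting; a minimal sketch (the function name, NaN handling, and example values are assumptions, not the paper's implementation):

```python
import numpy as np

def merge_aod(aod_a, sigma_a, aod_b, sigma_b):
    """Inverse-variance (maximum-likelihood) merge of two AOD products.
    Where one product is missing (NaN), the other is used directly."""
    wa, wb = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    merged = (wa * aod_a + wb * aod_b) / (wa + wb)
    merged = np.where(np.isnan(aod_a), aod_b, merged)               # only B available
    merged = np.where(np.isnan(aod_b),
                      np.where(np.isnan(aod_a), np.nan, aod_a),     # only A, or neither
                      merged)
    return merged
```

A usage example: `merge_aod(np.array([0.2]), np.array([0.1]), np.array([0.4]), np.array([0.2]))` weights the lower-uncertainty product more heavily.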
(This article belongs to the Section Atmosphere Remote Sensing)

Article
Driven by Drones: Improving Mangrove Extent Maps Using High-Resolution Remote Sensing
Remote Sens. 2020, 12(23), 3986; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233986 - 05 Dec 2020
Cited by 2 | Viewed by 1258
Abstract
This study investigated how different remote sensing techniques can be combined to accurately monitor mangroves. In this paper, we present a framework that uses drone imagery to calculate correction factors which can improve the accuracy of satellite-based mangrove extent. We focus on the semi-arid dwarf mangroves of Baja California Sur, Mexico, where the mangroves tend to be stunted in height and found in small patches as well as larger forests. Using a DJI Phantom 4 Pro, we imaged mangroves and labeled their extent by manual classification in QGIS. Using ArcGIS, we compared satellite-based mangrove extent maps from Global Mangrove Watch (GMW) in 2016 and Mexico's national government agency (National Commission for the Knowledge and Use of Biodiversity, CONABIO) in 2015 with extent maps generated from in situ drone studies in 2018 and 2019. We found that satellite-based extent maps generally overestimated mangrove coverage compared to drone-based maps. To correct this overestimation, we developed a method to derive correction factors for GMW mangrove extent. These correction factors correspond to specific pixel patterns generated from a convolution analysis and mangrove coverage defined from drone imagery. We validated our model by using repeated k-fold cross-validation, producing an accuracy of 98.3% ± 2.1%. Overall, drones and satellites are complementary tools, and the rise of machine learning can help stakeholders further leverage the strengths of the two tools to better monitor mangroves for local, national, and international management. Full article
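The repeated k-fold validation step can be sketched generically as below; the `fit`/`predict` callables stand in for the paper's (unspecified) correction-factor model, and all names here are assumptions:

```python
import random

def repeated_kfold_accuracy(samples, labels, fit, predict, k=5, repeats=10, seed=0):
    """Repeated k-fold cross-validation: mean and std of per-fold accuracy.
    `fit(train_x, train_y)` returns a model; `predict(model, x)` returns a label."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    scores = []
    for _ in range(repeats):
        rng.shuffle(idx)                      # new random split each repeat
        folds = [idx[i::k] for i in range(k)]
        for fold in folds:
            held_out = set(fold)
            train = [i for i in idx if i not in held_out]
            model = fit([samples[i] for i in train], [labels[i] for i in train])
            correct = sum(predict(model, samples[i]) == labels[i] for i in fold)
            scores.append(correct / len(fold))
    mean = sum(scores) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return mean, std
```

Reporting the mean with the fold-to-fold standard deviation mirrors the "98.3% ± 2.1%" form quoted in the abstract.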

Article
Shadow Detection and Restoration for Hyperspectral Images Based on Nonlinear Spectral Unmixing
Remote Sens. 2020, 12(23), 3985; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233985 - 05 Dec 2020
Cited by 3 | Viewed by 764 | Correction
Abstract
Shadows are frequently observable in high-resolution images, raising challenges in image interpretation, such as classification and object detection. In this paper, we propose a novel framework for shadow detection and restoration of atmospherically corrected hyperspectral images based on nonlinear spectral unmixing. The mixture model is applied pixel-wise as a nonlinear combination of endmembers related to both pure sunlit and shadowed spectra, where the former are manually selected from scenes and the latter are derived from sunlit spectra following physical assumptions. Shadowed pixels are restored by simulating their exposure to sunlight through a combination of sunlit endmembers spectra, weighted by abundance values. The proposed framework is demonstrated on real airborne hyperspectral images. A comprehensive assessment of the restored images is carried out both visually and quantitatively. With respect to binary shadow masks, our framework can produce soft shadow detection results, keeping the natural transition of illumination conditions on shadow boundaries. Our results show that the framework can effectively detect shadows and restore information in shadowed regions. Full article
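The restoration step described above amounts to recombining the sunlit endmember spectra weighted by the abundances estimated for the shadowed pixel; a minimal sketch (the two-band spectra and abundance values are illustrative assumptions):

```python
import numpy as np

def restore_shadowed_pixel(abundances, sunlit_endmembers):
    """Simulate sun exposure for a shadowed pixel: linear recombination of the
    sunlit endmember spectra (rows), weighted by the pixel's abundance values."""
    return np.asarray(abundances) @ np.asarray(sunlit_endmembers)

# Two sunlit endmember spectra over two bands (hypothetical reflectances).
endmembers = [[0.4, 0.2],
              [0.8, 0.6]]
restored = restore_shadowed_pixel([0.25, 0.75], endmembers)
```

Because the abundances vary continuously, the same machinery yields the soft shadow masks and smooth illumination transitions the abstract highlights.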
(This article belongs to the Special Issue Spectral Unmixing of Hyperspectral Remote Sensing Imagery)

Article
Building Extraction from High Spatial Resolution Remote Sensing Images via Multiscale-Aware and Segmentation-Prior Conditional Random Fields
Remote Sens. 2020, 12(23), 3983; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233983 - 05 Dec 2020
Cited by 5 | Viewed by 751
Abstract
Building extraction is a binary classification task that separates the building area from the background in remote sensing images. The conditional random field (CRF) is directly modelled by the maximum posterior probability, which can make full use of the spatial neighbourhood information of both labelled and observed images, and CRFs are widely used in building footprint extraction. However, edge oversmoothing still occurs when a CRF is directly used to extract buildings from high spatial resolution (HSR) remote sensing images. Based on a computer vision multi-scale semantic segmentation network (D-LinkNet), a novel building extraction framework is proposed, named multiscale-aware and segmentation-prior conditional random fields (MSCRF). To solve the problem of losing building details in the downsampling process, D-LinkNet, which connects the encoder and decoder, is used to generate the unary potential. By integrating multi-scale building features in its central module, D-LinkNet can incorporate multiscale contextual information without loss of resolution. For the pairwise potential, the segmentation prior is fused to alleviate the influence of spectral diversity between the building and the background area. Moreover, a local class-label cost term is introduced, so that clear building boundaries are obtained using larger-scale context information. The experimental results demonstrate that the proposed MSCRF framework is superior to state-of-the-art methods and performs well for building extraction in complex scenes. Full article
(This article belongs to the Section AI Remote Sensing)

Article
InSAR 3-D Coseismic Displacement Field of the 2015 Mw 7.8 Nepal Earthquake: Insights into Complex Fault Kinematics during the Event
Remote Sens. 2020, 12(23), 3982; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233982 - 05 Dec 2020
Cited by 1 | Viewed by 644
Abstract
The 2015 Mw 7.8 Gorkha, Nepal, earthquake occurred in the central Himalayan collisional orogenic belt and demonstrated complex fault kinematics and significant surface deformation. The coseismic deformation has been well documented by previous studies using Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data. However, due to the limitations of spatially sparse GPS stations and InSAR's one-dimensional observation along the line of sight (LOS), the complete distribution and detailed spatial variation of the three-dimensional surface deformation field are still not fully understood. In this study, we reconstructed the three-dimensional coseismic deformation fields using multi-view InSAR observations and investigated the refined surface deformation characteristics during this event. We first obtained four ascending and descending InSAR coseismic deformation maps from both Sentinel-1A/B and ALOS-2 data. Secondly, we obtained the synthetic north-south deformation field from our best-fitting slip distribution inversions. Finally, we calculated three-dimensional deformation fields, which were consistent with coseismic GPS displacements but of higher resolution. We found that the surface deformation is dominated by horizontal southward motion and vertical uplift and subsidence, with minor east-west deformation. In the north-south direction, the whole deformation area reaches at least 150 × 150 km with a maximum displacement of ~1.5 m. In the vertical direction, two areas, with uplift in the south and subsidence in the north, are mapped with peak displacements of 1.5 and −1.0 m, respectively. East-west deformation presented a four-quadrant distribution with a maximum displacement of ~0.6 m. Complex thrusting movement occurred on the seismogenic fault; overall, there was southward push motion and wave-shaped fold motion. Full article
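Per pixel, combining multi-view LOS observations into a 3D displacement reduces to a small least-squares system, since each LOS measurement is the projection of the (east, north, up) vector onto that view's unit vector; a sketch with hypothetical viewing geometries:

```python
import numpy as np

def los_to_3d(d_los, unit_vectors):
    """Least-squares inversion of multi-view LOS displacements for the 3D
    (east, north, up) displacement: d_los[i] = unit_vectors[i] . d.
    At least three independent viewing geometries are required."""
    A = np.asarray(unit_vectors)                     # shape (n_views, 3)
    d, *_ = np.linalg.lstsq(A, np.asarray(d_los), rcond=None)
    return d

# Hypothetical ascending/descending LOS unit vectors (east, north, up components).
A = np.array([[ 0.6,  0.1, 0.79],
              [-0.6,  0.1, 0.79],
              [ 0.1,  0.7, 0.70],
              [ 0.2, -0.5, 0.84]])
d_true = np.array([0.1, -1.5, 0.8])                  # illustrative E, N, U displacement (m)
d_hat = los_to_3d(A @ d_true, A)
```

In practice the north component is poorly constrained by near-polar SAR orbits, which is why the study supplements the inversion with a synthetic north-south field from slip-distribution modelling.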
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Article
DoMars16k: A Diverse Dataset for Weakly Supervised Geomorphologic Analysis on Mars
Remote Sens. 2020, 12(23), 3981; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233981 - 04 Dec 2020
Cited by 1 | Viewed by 1664
Abstract
Mapping planetary surfaces is an intricate task that forms the basis for many geologic, geomorphologic, and geographic studies of planetary bodies. In this work, we present a method to automate a specific type of planetary mapping, geomorphic mapping, taking machine learning as a basis. Additionally, we introduce a novel dataset, termed DoMars16k, which contains 16,150 samples of fifteen different landforms commonly found on the Martian surface. We use a convolutional neural network to establish a relation between Mars Reconnaissance Orbiter Context Camera images and the landforms of the dataset. Afterwards, we employ a sliding-window approach in conjunction with Markov random field smoothing to create maps in a weakly supervised fashion. Finally, we provide encouraging results and carry out automated geomorphological analyses of Jezero crater, the Mars 2020 landing site, and Oxia Planum, the prospective ExoMars landing site. Full article
(This article belongs to the Special Issue Planetary 3D Mapping, Remote Sensing and Machine Learning)

Article
Combining SAR and Optical Earth Observation with Hydraulic Simulation for Flood Mapping and Impact Assessment
Remote Sens. 2020, 12(23), 3980; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233980 - 04 Dec 2020
Cited by 1 | Viewed by 1309
Abstract
Timely mapping, measuring and impact assessment of flood events are crucial for the coordination of flood relief efforts and the elaboration of flood management and risk mitigation plans. However, this task is often challenging and time consuming with traditional land-based techniques. In this study, Sentinel-1 radar and Landsat images were utilized in combination with hydraulic modelling to obtain flood characteristics and land use/cover (LULC), and to assess flood impact in agricultural areas. Furthermore, indirect estimation of the recurrence interval of a flood event in a poorly gauged catchment was attempted by combining remote sensing (RS) and hydraulic modelling. To this end, a major flood event that occurred in the Sperchios river catchment in Central Greece, which is characterized by extensive farming activity, was used as a case study. The synergistic usage of multitemporal RS products and hydraulic modelling allowed the estimation of flood characteristics, such as extent, inundation depth, peak discharge, recurrence interval and inundation duration, providing valuable information for flood impact estimation and the future examination of flood hazard in poorly gauged basins. The capabilities of the ESA Sentinel-1 mission, which provides improved spatial and temporal resolution, thus allowing the extent and temporal dynamics of flood events to be mapped more accurately and independently of weather conditions, were also highlighted. Both radar and optical data processing methods, i.e., thresholding, image differencing and water index calculation, provided similar and satisfactory results.
Conclusively, multitemporal RS data and hydraulic modelling, with the selected techniques, can provide timely and useful flood observations during and right after flood disasters, applicable in large parts of the world where instrumental hydrological data are scarce and where a rapid survey of conditions and of temporal dynamics in the affected region is crucial. However, future missions that further reduce revisit times will be valuable in this endeavor. Full article
(This article belongs to the Collection Feature Papers for Section Biogeosciences Remote Sensing)
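The water-index step mentioned in the abstract can be sketched as follows. This is a minimal illustration only, not the study's actual processing chain: the band values, the use of the Modified NDWI (green vs. SWIR), and the zero threshold are all hypothetical choices for demonstration.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (G - SWIR) / (G + SWIR)."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / (green + swir + 1e-12)  # epsilon avoids division by zero

def water_mask(green, swir, threshold=0.0):
    """Pixels with MNDWI above the threshold are flagged as water."""
    return mndwi(green, swir) > threshold

# Toy 2x2 scene: left column wet (high green, low SWIR), right column dry.
green = np.array([[0.30, 0.10], [0.28, 0.12]])
swir = np.array([[0.05, 0.25], [0.04, 0.30]])
print(water_mask(green, swir))
```

Thresholding an index image in this way is the simplest of the three optical methods named in the abstract; image differencing would instead compare pre- and post-event scenes.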

Article
Adaptive-SFSDAF for Spatiotemporal Image Fusion that Selectively Uses Class Abundance Change Information
Remote Sens. 2020, 12(23), 3979; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233979 - 04 Dec 2020
Viewed by 513
Abstract
Many spatiotemporal image fusion methods in remote sensing have been developed to blend images of high spatial resolution with images of high temporal resolution, addressing the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the performance of the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends the existing fusion method that incorporates sub-pixel class fraction change information in Flexible Spatiotemporal DAta Fusion (SFSDAF) by applying spectral unmixing adaptively, greatly improving the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. The first is the detection of outliers of temporal change in the image during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The second is an adaptive unmixing strategy driven by a guided mask map, which effectively eliminates a large number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves outstanding performance in balancing computational efficiency and fusion accuracy. Full article
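The general idea behind an adaptive, mask-guided unmixing strategy can be sketched as follows. This is a speculative illustration of the concept, not the authors' algorithm: the change-magnitude mask, the threshold value, and the function names are all assumptions made for demonstration.

```python
import numpy as np

def guided_mask(coarse_t0, coarse_t1, threshold=0.1):
    """Flag pixels whose temporal change magnitude exceeds a threshold.

    Only these pixels would be sent to the expensive spectral-unmixing
    step; the rest can reuse a cheaper prediction.
    """
    change = np.abs(coarse_t1.astype(float) - coarse_t0.astype(float))
    return change > threshold

def selective_unmix(coarse_t0, coarse_t1, unmix_fn, cheap_fn, threshold=0.1):
    """Combine a cheap prediction with unmixing applied only where needed."""
    mask = guided_mask(coarse_t0, coarse_t1, threshold)
    out = cheap_fn(coarse_t0, coarse_t1)              # cheap prediction everywhere
    out[mask] = unmix_fn(coarse_t0, coarse_t1)[mask]  # refine only changed pixels
    return out
```

Skipping unmixing for low-change pixels is what yields the efficiency gain the abstract describes, since in many scenes most pixels change little between the origin and prediction dates.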

Article
A Grid Feature-Point Selection Method for Large-Scale Street View Image Retrieval Based on Deep Local Features
Remote Sens. 2020, 12(23), 3978; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12233978 - 04 Dec 2020
Cited by 1 | Viewed by 548
Abstract
Street view image retrieval aims to estimate image locations by querying the nearest-neighbor images of the same scene from a large-scale reference dataset. Query images usually have no location information and are represented by features used to search for similar results. The deep local features (DELF) method shows great performance on the landmark retrieval task, but it extracts so many features that the feature file becomes too large to load into memory when training the feature index. Since memory is limited and simply discarding part of the features causes a great loss of retrieval precision, this paper proposes a grid feature-point selection method (GFS) to reduce the number of feature points in each image while minimizing the precision loss. Convolutional neural networks (CNNs) are constructed to extract dense features, and an attention module is embedded into the network to score them. GFS divides the image into a grid and keeps only the features with the highest scores in each local region. Product quantization and an inverted index are used to index the image features and improve retrieval efficiency. The retrieval performance of the method was tested on a large-scale Hong Kong street view dataset; the results show that GFS reduces the number of feature points by 32.27–77.09% compared with the raw features and achieves 5.27–23.59% higher precision than other methods. Full article
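The grid-based selection idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, grid size, and per-cell quota (`top_k`) are hypothetical, and the attention scores stand in for the CNN's output.

```python
import numpy as np

def grid_feature_select(points, scores, image_shape, grid=(4, 4), top_k=2):
    """Keep only the top_k highest-scoring feature points in each grid cell.

    points: (N, 2) integer array of (row, col) keypoint locations
    scores: (N,) attention scores, one per keypoint
    Returns the sorted indices of the retained keypoints.
    """
    h, w = image_shape
    # Map each keypoint to its grid cell (clamped to the last cell).
    rows = np.minimum(points[:, 0] * grid[0] // h, grid[0] - 1)
    cols = np.minimum(points[:, 1] * grid[1] // w, grid[1] - 1)
    cell_id = rows * grid[1] + cols
    keep = []
    for cell in np.unique(cell_id):
        idx = np.where(cell_id == cell)[0]
        # Highest-scoring top_k points within this cell.
        best = idx[np.argsort(scores[idx])[::-1][:top_k]]
        keep.extend(best.tolist())
    return sorted(keep)
```

Capping the number of keypoints per cell, rather than thresholding scores globally, keeps the retained features spatially spread across the image, which is presumably why a purely score-based cut loses more precision.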
