High-Resolution Earth Observation Systems, Technologies, and Applications

Dear Colleagues,

In the past 20 years, many countries have attached great importance to high-resolution Earth observation systems (EOSs), technologies, and applications. In particular, China's recent Gaofen series satellites, from Gaofen-1 to Gaofen-13, were successfully launched between 2013 and 2020. To date, global high-resolution EOSs have covered the panchromatic, multispectral, hyperspectral, visible, and microwave wavebands. It is fair to say that the various high-resolution EOSs together provide Earth observation with high spatial, temporal, and spectral resolution, offering strong support for improving Earth observation capabilities.

In terms of development trends, the application prospects of high-resolution EOSs are extensive, and we expect many more new high-resolution EOSs to be launched in the near future. Moreover, applications of high-resolution EOSs have already produced a rich body of achievements. This multidisciplinary Topic therefore invites scholars to publish articles on the latest progress and development trends in high-resolution EOSs, technologies, and applications.

Potential topics for this Topic include, but are not limited to:

  • Current and future high-resolution EOS and missions
  • Innovative Earth observation sensors, concepts, and techniques
  • Artificial intelligence in EOS remote sensing applications
  • On-board real-time processing of EOS remote sensing images
  • EOS remote sensing image recognition and interpretation
  • Quality improvement of EOS remote sensing images
  • High-precision geometric positioning of EOS remote sensing images
  • Super-resolution processing of EOS remote sensing images
  • Multi-source EOS image fusion
  • Other related topics

Deadline for abstract submissions: 31 October 2021.
Deadline for manuscript submissions: 20 June 2022.

Topic Board

Prof. Dr. Mi Wang
Topic Editor-in-Chief
The State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
Interests: high-resolution optical satellite remote sensing image processing and application
Prof. Dr. Hanwen Yu
Topic Associate Editor-in-Chief
School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: InSAR signal processing and application; phase unwrapping; algorithm design; machine learning
Dr. Jianlai Chen
Topic Board Member
School of Aeronautics and Astronautics, Central South University, Changsha 410083, China
Interests: synthetic aperture radar (SAR) imaging; radar image recognition and interpretation
Dr. Ying Zhu
Topic Board Member
School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China
Interests: geometric processing; image matching; accuracy analysis and improvement for high-resolution satellite imagery

Relevant Journals List

Journal Name     Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Sensors          3.576           5.8         2001            15.06 Days                2200 CHF
Remote Sensing   4.848           6.6         2009            16.06 Days                2400 CHF

Published Papers (36 papers)

Article
Dual Attention Feature Fusion and Adaptive Context for Accurate Segmentation of Very High-Resolution Remote Sensing Images
Remote Sens. 2021, 13(18), 3715; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183715 - 17 Sep 2021
Abstract
Land cover classification of high-resolution remote sensing images aims to obtain pixel-level land cover understanding, which is often modeled as semantic segmentation of remote sensing images. In recent years, convolutional network (CNN)-based land cover classification methods have achieved great advancement. However, previous methods fail to generate fine segmentation results, especially for the object boundary pixels. In order to obtain boundary-preserving predictions, we first propose to incorporate spatially adapting contextual cues. In this way, objects with similar appearance can be effectively distinguished with the extracted global contextual cues, which are very helpful to identify pixels near object boundaries. On this basis, low-level spatial details and high-level semantic cues are effectively fused with the help of our proposed dual attention mechanism. Concretely, when fusing multi-level features, we utilize the dual attention feature fusion module based on both spatial and channel attention mechanisms to relieve the influence of the large gap, and further improve the segmentation accuracy of pixels near object boundaries. Extensive experiments were carried out on the ISPRS 2D Semantic Labeling Vaihingen data and GaoFen-2 data to demonstrate the effectiveness of our proposed method. Our method achieves better performance compared with other state-of-the-art methods. Full article
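
To illustrate the kind of fusion module described above, the following is a minimal sketch (assuming PyTorch; the module structure and names are illustrative, not taken from the paper) of a dual attention feature fusion block that reweights a concatenation of low-level and high-level feature maps with channel and then spatial attention.

    import torch
    import torch.nn as nn

    class DualAttentionFusion(nn.Module):
        """Fuse low-level (spatial detail) and high-level (semantic) features
        with channel attention followed by spatial attention. Illustrative only."""
        def __init__(self, channels):
            super().__init__()
            # Channel attention: squeeze spatial dims, excite channels.
            self.channel_fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(2 * channels, 2 * channels // 4, 1), nn.ReLU(inplace=True),
                nn.Conv2d(2 * channels // 4, 2 * channels, 1), nn.Sigmoid())
            # Spatial attention: one weight map per location from pooled channel statistics.
            self.spatial_conv = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())
            self.project = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, low, high):
            # Upsample high-level features to the low-level resolution, then concatenate.
            high = nn.functional.interpolate(high, size=low.shape[2:], mode='bilinear',
                                             align_corners=False)
            x = torch.cat([low, high], dim=1)
            x = x * self.channel_fc(x)                                # channel reweighting
            avg = x.mean(dim=1, keepdim=True)                         # spatial statistics
            mx, _ = x.max(dim=1, keepdim=True)
            x = x * self.spatial_conv(torch.cat([avg, mx], dim=1))    # spatial reweighting
            return self.project(x)

    # Example: fuse 64-channel features taken at two different strides.
    fused = DualAttentionFusion(64)(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 32, 32))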

Article
New Channel Errors Estimation Method for Multichannel SAR Based on Virtual Calibration Source
Remote Sens. 2021, 13(18), 3625; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183625 - 11 Sep 2021
Abstract
The multichannel synthetic aperture radar (SAR) system can effectively overcome the fundamental limitation between high-resolution and wide-swath. However, the unavoidable channel errors will result in a mismatch of the reconstruction filter and false targets in pairs. To address this issue, a novel channel errors calibration method is proposed based on the idea of minimizing the mean square error (MMSE) between the signal subspace and the space spanned by the practical steering vectors. The practical steering matrix of each Doppler bin can be constructed according to the Doppler spectrum. Compared with the time-domain correlation method, the proposed method no longer depends on the accuracy of the Doppler centroid estimation. Besides, compared with the orthogonal subspace method, the proposed method has the advantage of robustness under the condition of large samples by using the diagonal loading technique. To evaluate the performance, the results of simulation data and the real data acquired by the GF-3 dual-channel SAR system demonstrate that the proposed method has higher accuracy and more robustness than the conventional methods, especially in the case of low SNRs and high non-uniformity. Full article
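
As a simplified illustration of the subspace ideas involved (not the authors' exact MMSE formulation), the sketch below estimates per-channel phase errors for one Doppler bin by diagonally loading the sample covariance, extracting the principal signal-subspace vector, and comparing it with the ideal steering vector; all array shapes and names are assumptions.

    import numpy as np

    def estimate_channel_phase_errors(x, a_ideal, loading=1e-2):
        """x: (n_channels, n_samples) complex data for one Doppler bin.
        a_ideal: (n_channels,) ideal steering vector for that bin.
        Returns estimated phase errors (radians) referenced to channel 0."""
        n_ch, n_smp = x.shape
        R = x @ x.conj().T / n_smp                                 # sample covariance
        R += loading * np.trace(R).real / n_ch * np.eye(n_ch)      # diagonal loading
        _, V = np.linalg.eigh(R)
        u = V[:, -1]                                               # principal eigenvector (signal subspace)
        phi = np.angle(u / a_ideal)                                # per-channel phase mismatch
        return phi - phi[0]

    # Toy usage: 4 channels carrying a common signal with small imposed phase errors.
    rng = np.random.default_rng(0)
    true_err = np.array([0.0, 0.2, -0.1, 0.3])
    s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))
    a = np.ones(4)
    x = np.exp(1j * true_err)[:, None] * a[:, None] * s[None, :]
    x += 0.05 * (rng.standard_normal((4, 256)) + 1j * rng.standard_normal((4, 256)))
    print(estimate_channel_phase_errors(x, a))                     # approximately [0, 0.2, -0.1, 0.3]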

Technical Note
Micro-Motion Parameter Extraction for Ballistic Missile with Wideband Radar Using Improved Ensemble EMD Method
Remote Sens. 2021, 13(17), 3545; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173545 - 06 Sep 2021
Abstract
Micro-motion parameters extraction is crucial in recognizing ballistic missiles with a wideband radar. It is known that the phase-derived range (PDR) method can provide a sub-wavelength level accuracy. However, it is sensitive and unstable when the signal-to-noise ratio (SNR) is low. In this paper, an improved PDR method is proposed to reduce the impacts of low SNRs. First, the high range resolution profile (HRRP) is divided into a series of segments so that each segment contains a single scattering point. Then, the peak values of each segment are viewed as non-stationary signals, which are further decomposed into a series of intrinsic mode functions (IMFs) with different energy, using the ensemble empirical mode decomposition with the complementary adaptive noise (EEMDCAN) method. In the EEMDCAN decomposition, positive and negative adaptive noise pairs are added to each IMF layer to effectively eliminate the mode-mixing phenomenon that exists in the original empirical mode decomposition (EMD) method. An energy threshold is designed to select proper IMFs to reconstruct the envelop for high estimation accuracy and low noise effects. Finally, the least-square algorithm is used to do the ambiguous phases unwrapping to obtain the micro-curve, which can be further used to estimate the micro-motion parameters of the warhead. Simulation results show that the proposed method performs well with SNR at −5 dB with an accuracy level of sub-wavelength. Full article
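
The envelope-reconstruction step can be illustrated with a short, hedged sketch: assuming the intrinsic mode functions have already been obtained from an ensemble EMD implementation, only the IMFs whose energy exceeds a fraction of the total energy are kept and summed. The function names and threshold value are illustrative, not the paper's.

    import numpy as np

    def reconstruct_envelope(imfs, energy_fraction=0.05):
        """imfs: (n_imfs, n_samples) array of intrinsic mode functions.
        Keep IMFs whose energy exceeds `energy_fraction` of the total energy
        and sum them to obtain a low-noise envelope estimate."""
        energies = np.sum(imfs ** 2, axis=1)
        keep = energies >= energy_fraction * energies.sum()
        return imfs[keep].sum(axis=0)

    # Toy usage with synthetic "IMFs": a micro-motion sinusoid plus two weak noise modes.
    t = np.linspace(0, 1, 1000)
    imfs = np.stack([0.02 * np.random.randn(1000),         # high-frequency noise mode
                     0.5 * np.sin(2 * np.pi * 4 * t),      # micro-motion component
                     0.01 * np.random.randn(1000)])        # residual noise mode
    envelope = reconstruct_envelope(imfs)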

Article
A Spatial Variant Motion Compensation Algorithm for High-Monofrequency Motion Error in Mini-UAV-Based BiSAR Systems
Remote Sens. 2021, 13(17), 3544; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173544 - 06 Sep 2021
Abstract
High-frequency motion errors can drastically decrease the image quality in mini-unmanned-aerial-vehicle (UAV)-based bistatic synthetic aperture radar (BiSAR), where the spatial variance is much more complex than that in monoSAR. High-monofrequency motion error is a special BiSAR case in which the different motion errors from transmitters and receivers lead to the formation of monofrequency motion error. Furthermore, neither of the classic processors, BiSAR and monoSAR, can compensate for the coupled high-monofrequency motion errors. In this paper, a spatial variant motion compensation algorithm for high-monofrequency motion errors is proposed. First, the bistatic rotation error model that causes high-monofrequency motion error is re-established to account for the bistatic spatial variance of image formation. Second, the corresponding parameters of error model nonlinear gradient are obtained by the joint estimation of subimages. Third, the bistatic spatial variance can be adaptively compensated for based on the error of the nonlinear gradient through contour projection. It is suggested based on the simulation and experimental results that the proposed algorithm can effectively compensate for high-monofrequency motion error in mini-UAV-based BiSAR system conditions. Full article

Article
An Improved Cloud Gap-Filling Method for Longwave Infrared Land Surface Temperatures through Introducing Passive Microwave Techniques
Remote Sens. 2021, 13(17), 3522; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173522 - 05 Sep 2021
Abstract
Satellite-derived land surface temperature (LST) data are most commonly observed in the longwave infrared (LWIR) spectral region. However, such data suffer frequent gaps in coverage caused by cloud cover. Filling these ‘cloud gaps’ usually relies on statistical re-constructions using proximal clear sky LST pixels, whilst this is often a poor surrogate for shadowed LSTs insulated under cloud. Another solution is to rely on passive microwave (PM) LST data that are largely unimpeded by cloud cover impacts, the quality of which, however, is limited by the very coarse spatial resolution typical of PM signals. Here, we combine aspects of these two approaches to fill cloud gaps in the LWIR-derived LST record, using Kenya (East Africa) as our study area. The proposed “cloud gap-filling” approach increases the coverage of daily Aqua MODIS LST data over Kenya from <50% to >90%. Evaluations were made against the in situ and SEVIRI-derived LST data respectively, revealing root mean square errors (RMSEs) of 2.6 K and 3.6 K for the proposed method by mid-day, compared with RMSEs of 4.3 K and 6.7 K for the conventional proximal-pixel-based statistical re-construction method. We also find that such accuracy improvements become increasingly apparent when the total cloud cover residence time increases in the morning-to-noon time frame. At mid-night, cloud gap-filling performance is also better for the proposed method, though the RMSE improvement is far smaller (<0.3 K) than in the mid-day period. The results indicate that our proposed two-step cloud gap-filling method can improve upon performances achieved by conventional methods for cloud gap-filling and has the potential to be scaled up to provide data at continental or global scales as it does not rely on locality-specific knowledge or datasets. Full article
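
The accuracy figures above come from comparing gap-filled values against reference LSTs over the originally cloudy pixels; a minimal sketch of that evaluation (array names are assumptions, not from the paper) is:

    import numpy as np

    def gap_fill_rmse(filled_lst, reference_lst, was_cloudy):
        """RMSE (K) of gap-filled LST, computed only over pixels that were
        originally cloud-covered and have a valid reference value."""
        mask = was_cloudy & np.isfinite(reference_lst) & np.isfinite(filled_lst)
        diff = filled_lst[mask] - reference_lst[mask]
        return float(np.sqrt(np.mean(diff ** 2)))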

Article
Observation of Surface Displacement Associated with Rapid Urbanization and Land Creation in Lanzhou, Loess Plateau of China with Sentinel-1 SAR Imagery
Remote Sens. 2021, 13(17), 3472; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173472 - 01 Sep 2021
Abstract
Lanzhou is one of the cities with the higher number of civil engineering projects for mountain excavation and city construction (MECC) on the China’s Loess Plateau. As a result, the city is suffering from severe surface displacement, which is posing an increasing threat to the safety of the buildings. However, up to date, there is no comprehensive and high-precision displacement map to characterize the spatiotemporal surface displacement patterns in the city of Lanzhou. In this study, satellite-based observations, including optical remote sensing and synthetic aperture radar (SAR) sensing, were jointly used to characterize the landscape and topography changes in Lanzhou between 1997 and 2020 and investigate the spatiotemporal patterns of the surface displacement associated with the large-scale MECC projects from 2015 December to March 2021. First, we retrieved the landscape changes in Lanzhou during the last 23 years using multi-temporal optical remote sensing images. Results illustrate that the landscape in local areas of Lanzhou has been dramatically changed as a result of the large-scale MECC projects and rapid urbanization. Then, we optimized the ordinary time series InSAR processing procedure by a “dynamic estimation of digital elevation model (DEM) errors” step added before displacement inversion to avoid the false displacement signals caused by DEM errors. The DEM errors and the high-precision surface displacement maps between December 2015 and March 2021 were calculated with 124 ascending and 122 descending Sentinel-1 SAR images. By combining estimated DEM errors and optical images, we detected and mapped historical MECC areas in the study area since 2000, retrieved the excavated and filling areas of the MECC projects, and evaluated their areas and volumes as well as the thickness of the filling loess. Results demonstrated that the area and volume of the excavated regions were basically equal to that of the filling regions, and the maximum thickness of the filling loess was greater than 90 m. Significant non-uniform surface displacements were observed in the filling regions of the MECC projects, with the maximum cumulative displacement lower than −40 cm. 2D displacement results revealed that surface displacement associated with the MECC project was dominated by settlements. From the correlation analysis between the displacement and the filling thickness, we found that the displacement magnitude was positively correlated with the thickness of the filling loess. This finding indicated that the compaction and consolidation process of the filling loess largely dominated the surface displacement. Our findings are of paramount importance for the urban planning and construction on the Loess Plateau region in which large-scale MECC projects are being developed. Full article

Article
Cross-Domain Scene Classification Based on a Spatial Generalized Neural Architecture Search for High Spatial Resolution Remote Sensing Images
Remote Sens. 2021, 13(17), 3460; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173460 - 01 Sep 2021
Abstract
By labelling high spatial resolution (HSR) images with specific semantic classes according to geographical properties, scene classification has been proven to be an effective method for HSR remote sensing image semantic interpretation. Deep learning is widely applied in HSR remote sensing scene classification. Most of the scene classification methods based on deep learning assume that the training datasets and the test datasets come from the same datasets or obey similar feature distributions. However, in practical application scenarios, it is difficult to guarantee this assumption. For new datasets, it is time-consuming and labor-intensive to repeat data annotation and network design. The neural architecture search (NAS) can automate the process of redesigning the baseline network. However, traditional NAS lacks the generalization ability to different settings and tasks. In this paper, a novel neural network search architecture framework—the spatial generalization neural architecture search (SGNAS) framework—is proposed. This model applies the NAS of spatial generalization to cross-domain scene classification of HSR images to bridge the domain gap. The proposed SGNAS can automatically search the architecture suitable for HSR image scene classification and possesses network design principles similar to the manually designed networks. To obtain a simple and low-dimensional search space, the traditional NAS search space was optimized and the human-the-loop method was used. To extend the optimized search space to different tasks, the search space was generalized. The experimental results demonstrate that the network searched by the SGNAS framework with good generalization ability displays its effectiveness for cross-domain scene classification of HSR images, both in accuracy and time efficiency. Full article

Article
High Speed Maneuvering Platform Squint TOPS SAR Imaging Based on Local Polar Coordinate and Angular Division
Remote Sens. 2021, 13(16), 3329; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163329 - 23 Aug 2021
Abstract
This paper proposes an imaging algorithm for synthetic aperture radar (SAR) mounted on a high-speed maneuvering platform with squint terrain observation by progressive scan mode. To overcome the mismatch between range model and the signal after range walk correction, the range history is calculated in local polar format. The Doppler ambiguity is resolved by nonlinear derotation and zero-padding. The recovered signal is divided into several blocks in Doppler according to the angular division. Keystone transform is used to remove the space-variant range cell migration (RCM) components. Thus, the residual RCM terms can be compensated by a unified phase function. Frequency domain perturbation terms are introduced to correct the space-variant Doppler chirp rate term. The focusing parameters are calculated according to the scene center of each angular block and the signal of each block can be processed in parallel. The image of each block is focused in range-Doppler domain. After the geometric correction, the final focused image can be obtained by directly combined the images of all angular blocks. Simulated SAR data has verified the effectiveness of the proposed algorithm. Full article

Article
Investigation of Thundercloud Features in Different Regions
Remote Sens. 2021, 13(16), 3216; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163216 - 13 Aug 2021
Abstract
A comparison of thundercloud characteristics in different regions of the world was conducted. The clouds studied developed in India, China and in two regions of Russia. Several field projects were discussed. Cloud characteristics were measured by weather radars, the SEVERI instrument installed on board of the Meteosat satellite, and lightning detection systems. The statistical characteristics of the clouds were tabulated from radar scans and correlated with lightning observations. Thunderclouds in India differ significantly from those observed in other regions. The relationships among lightning strike frequency, supercooled cloud volume, and precipitation intensity were analyzed. In most cases, high correlation was observed between lightning strike frequency and supercooled volume. Full article

Article
Unsupervised Reconstruction of Sea Surface Currents from AIS Maritime Traffic Data Using Trainable Variational Models
Remote Sens. 2021, 13(16), 3162; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163162 - 10 Aug 2021
Abstract
The estimation of ocean dynamics is a key challenge for applications ranging from climate modeling to ship routing. State-of-the-art methods relying on satellite-derived altimetry data can hardly resolve spatial scales below ∼100 km. In this work we investigate the relevance of AIS data streams as a new mean for the estimation of the surface current velocities. Using a physics-informed observation model, we propose to solve the associated the ill-posed inverse problem using a trainable variational formulation. The latter exploits variational auto-encoders coupled with neural ODE to represent sea surface dynamics. We report numerical experiments on a real AIS dataset off South Africa in a highly dynamical ocean region. They support the relevance of the proposed learning-based AIS-driven approach to significantly improve the reconstruction of sea surface currents compared with state-of-the-art methods, including altimetry-based ones. Full article

Article
Estimation of Evapotranspiration and Its Components across China Based on a Modified Priestley–Taylor Algorithm Using Monthly Multi-Layer Soil Moisture Data
Remote Sens. 2021, 13(16), 3118; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163118 - 06 Aug 2021
Abstract
Although soil moisture (SM) is an important constraint factor of evapotranspiration (ET), the majority of the satellite-driven ET models do not include SM observations, especially the SM at different depths, since its spatial and temporal distribution is difficult to obtain. Based on monthly three-layer SM data at a 0.25° spatial resolution determined from multi-sources, we updated the original Priestley Taylor–Jet Propulsion Laboratory (PT-JPL) algorithm to the Priestley Taylor–Soil Moisture Evapotranspiration (PT-SM ET) algorithm by incorporating SM control into soil evaporation (Es) and canopy transpiration (T). Both algorithms were evaluated using 17 eddy covariance towers across different biomes of China. The PT-SM ET model shows increased R2, NSE and reduced RMSE, Bias, with more improvements occurring in water-limited regions. SM incorporation into T enhanced ET estimates by increasing R2 and NSE by 4% and 18%, respectively, and RMSE and Bias were respectively reduced by 34% and 7 mm. Moreover, we applied the two ET algorithms to the whole of China and found larger increases in T and Es in the central, northeastern, and southern regions of China when using the PT-SM algorithm compared with the original algorithm. Additionally, the estimated mean annual ET increased from the northwest to the southeast. The SM constraint resulted in higher transpiration estimate and lower evaporation estimate. Es was greatest in the northwest arid region, interception was a large fraction in some rainforests, and T was dominant in most other regions. Further improvements in the estimation of ET components at high spatial and temporal resolution are likely to lead to a better understanding of the water movement through the soil–plant–atmosphere continuum. Full article
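
As a hedged illustration of how a soil-moisture constraint can enter a Priestley-Taylor-type formulation (a deliberately simplified sketch, not the PT-SM model itself; the stress function and all names are assumptions), soil evaporation and transpiration can each be scaled by a moisture-dependent factor:

    import numpy as np

    ALPHA = 1.26   # Priestley-Taylor coefficient

    def pt_potential(delta, gamma, available_energy):
        """Priestley-Taylor potential term: alpha * Delta / (Delta + gamma) * (Rn - G)."""
        return ALPHA * delta / (delta + gamma) * available_energy

    def et_with_sm_constraint(delta, gamma, rn_soil, rn_canopy, sm, sm_wilt, sm_crit):
        """Simplified two-component ET: soil evaporation and canopy transpiration,
        each down-regulated by a linear soil-moisture stress factor in [0, 1]."""
        f_sm = np.clip((sm - sm_wilt) / (sm_crit - sm_wilt), 0.0, 1.0)
        es = f_sm * pt_potential(delta, gamma, rn_soil)       # soil evaporation
        t = f_sm * pt_potential(delta, gamma, rn_canopy)      # canopy transpiration
        return es + t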

Technical Note
Reason Analysis of the Jiwenco Glacial Lake Outburst Flood (GLOF) and Potential Hazard on the Qinghai-Tibetan Plateau
Remote Sens. 2021, 13(16), 3114; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163114 - 06 Aug 2021
Abstract
Glacial lake outburst flood (GLOF) is one of the major natural disasters in the Qinghai-Tibetan Plateau (QTP). On 25 June 2020, the outburst of the Jiwenco Glacial Lake (JGL) in the upper reaches of Nidu river in Jiari County of the QTP reached the downstream Niwu Township on 26 June, causing damage to many bridges, roads, houses, and other infrastructure, and disrupting telecommunications for several days. Based on radar and optical image data, the evolution of the JGL before and after the outburst was analyzed. The results showed that the area and storage capacity of the JGL were 0.58 square kilometers and 0.071 cubic kilometers, respectively, before the outburst (29 May), and only 0.26 square kilometers and 0.017 cubic kilometers remained after the outburst (27 July). The outburst reservoir capacity was as high as 5.4 million cubic meters. The main cause of the JGL outburst was the heavy precipitation process before outburst and the ice/snow/landslides entering the lake was the direct inducement. The outburst flood/debris flow disaster also led to many sections of the river and buildings in Niwu Township at high risk. Therefore, it is urgent to pay more attention to glacial lake outburst floods and other low-probability disasters, and early real-time engineering measures should be taken to minimize their potential impacts. Full article

Article
A New 32-Day Average-Difference Method for Calculating Inter-Sensor Calibration Radiometric Biases between SNPP and NOAA-20 Instruments within ICVS Framework
Remote Sens. 2021, 13(16), 3079; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163079 - 05 Aug 2021
Abstract
Two existing double-difference (DD) methods, using either a 3rdSensor or Radiative Transfer Modeling (RTM) as a transfer, are applicable primarily for limited regions and channels, and, thus critical in capturing inter-sensor calibration radiometric bias features. A supplementary method is also desirable for estimating inter-sensor calibration biases at the window and lower sounding channels where the DD methods have non-negligible errors. In this study, using the Suomi National Polar-orbiting Partnership (SNPP) and Joint Polar Satellite System (JPSS)-1 (alias NOAA-20) as an example, we present a new inter-sensor bias statistical method by calculating 32-day averaged differences (32D-AD) of radiometric measurements between the same instrument onboard two satellites. In the new method, a quality control (QC) scheme using one-sigma (for radiance difference), or two-sigma (for radiance) thresholds are established to remove outliers that are significantly affected by diurnal biases within the 32-day temporal coverage. The performance of the method is assessed by applying it to estimate inter-sensor calibration radiometric biases for four instruments onboard SNPP and NOAA-20, i.e., Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Nadir Profiler (NP) within the Ozone Mapping and Profiler Suite (OMPS), and Visible Infrared Imaging Radiometer Suite (VIIRS). Our analyses indicate that the globally-averaged inter-sensor differences using the 32D-AD method agree with those using the existing DD methods for available channels, with margins partially due to remaining diurnal errors. In addition, the new method shows its capability in assessing zonal mean features of inter-sensor calibration biases at upper sounding channels. It also detects the solar intrusion anomaly occurring on NOAA-20 OMPS NP at wavelengths below 300 nm over the Northern Hemisphere. Currently, the new method is being operationally adopted to monitor the long-term trends of (globally-averaged) inter-sensor calibration radiometric biases at all channels for the above sensors in the Integrated Calibration/Validation System (ICVS). It is valuable in demonstrating the quality consistencies of the SDR data at the four instruments between SNPP and NOAA-20 in long-term statistics. The methodology is also applicable for other POES cross-sensor calibration bias assessments with minor changes. Full article
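
A minimal sketch of the 32-day average-difference idea (variable names and the exact screening rule are illustrative, not the operational ICVS implementation): collocated radiance differences accumulated over a 32-day window are screened with a one-sigma threshold before averaging.

    import numpy as np

    def bias_32day(rad_sat1, rad_sat2, n_sigma=1.0):
        """rad_sat1, rad_sat2: 1-D arrays of collocated radiances from the same
        instrument on two satellites over a 32-day window (one channel).
        Returns the QC-screened mean inter-sensor difference."""
        diff = rad_sat1 - rad_sat2
        mu, sigma = np.nanmean(diff), np.nanstd(diff)
        keep = np.abs(diff - mu) <= n_sigma * sigma      # one-sigma outlier screening
        return float(np.nanmean(diff[keep]))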

Article
Interferometric Phase Error Analysis and Compensation in GNSS-InSAR: A Case Study of Structural Monitoring
Remote Sens. 2021, 13(15), 3041; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13153041 - 03 Aug 2021
Abstract
Global navigation satellite system (GNSS)-based synthetic aperture radar interferometry (InSAR) employs GNSS satellites as transmitters of opportunity and a fixed receiver with two channels, i.e., direct wave and echo, on the ground. The repeat-pass concept is adopted in GNSS-based InSAR to retrieve the deformation of the target area, and it has inherited advantages from the GNSS system, such as a short repeat-pass period and multi-angle retrieval. However, several interferometric phase errors, such as inter-channel and atmospheric errors, are introduced into GNSS-based InSAR, which seriously decreases the accuracy of the retrieved deformation. In this paper, a deformation retrieval algorithm is presented to assess the compensation of the interferometric phase errors in GNSS-based InSAR. Firstly, the topological phase error was eliminated based on accurate digital elevation model (DEM) information from a light detection and ranging (lidar) system. Secondly, the inter-channel phase error was compensated, using direct wave in the echo channel, i.e., a back lobe signal. Finally, by modeling the atmospheric phase, the residual atmospheric phase error was compensated for. This is the first realization of the deformation detection of urban scenes using a GNSS-based system, and the results suggest the effectiveness of the phase error compensation algorithm. Full article

Article
A Second-Order Time-Difference Position Constrained Reduced-Dynamic Technique for the Precise Orbit Determination of LEOs Using GPS
Remote Sens. 2021, 13(15), 3033; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13153033 - 02 Aug 2021
Abstract
In this paper, we propose a new reduced-dynamic (RD) method by introducing the second-order time-difference position (STP) as additional pseudo-observations (named the RD_STP method) for the precise orbit determination (POD) of low Earth orbiters (LEOs) from GPS observations. Theoretical and numerical analyses show that the accuracies of integrating the STPs of LEOs at 30 s intervals are better than 0.01 m when the forces (<10−5 ms−2) acting on the LEOs are ignored. Therefore, only using the Earth’s gravity model is good enough for the proposed RD_STP method. All unmodeled dynamic models (e.g., luni-solar gravitation, tide forces) are treated as the error sources of the STP pseudo-observation. In addition, there are no pseudo-stochastic orbit parameters to be estimated in the RD_STP method. Finally, we use the RD_STP method to process 15 days of GPS data from the GOCE mission. The results show that the accuracy of the RD_STP solution is more accurate and smoother than the kinematic solution in nearly polar and equatorial regions, and consistent with the RD solution. The 3D RMS of the differences between the RD_STP and RD solutions is 1.93 cm for 1 s sampling. This indicates that the proposed method has a performance comparable to the RD method, and could be an alternative for the POD of LEOs. Full article
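
The second-order time-difference position pseudo-observation itself is easy to illustrate (a sketch with assumed names, not the authors' implementation): for positions sampled at a fixed interval dt, the STP is the discrete second difference, which approximates acceleration times dt squared and can therefore be predicted from the gravity model alone.

    import numpy as np

    def second_order_time_difference(positions):
        """positions: (n_epochs, 3) orbit positions at a fixed sampling interval.
        Returns the STP pseudo-observations r[i+1] - 2*r[i] + r[i-1] for the
        interior epochs; these approximate a(t_i) * dt**2."""
        return positions[2:] - 2.0 * positions[1:-1] + positions[:-2]

    # Toy check: for a constant-acceleration trajectory the STP equals a * dt**2 exactly.
    dt, a = 30.0, np.array([0.0, 0.0, -9.0e-6])              # 30 s sampling, tiny acceleration
    t = np.arange(10)[:, None] * dt
    pos = 7.0e6 + 7.5e3 * t + 0.5 * a * t ** 2
    print(second_order_time_difference(pos) / dt ** 2)        # ~ a at every interior epoch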

Article
Moving Target Shadow Analysis and Detection for ViSAR Imagery
Remote Sens. 2021, 13(15), 3012; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13153012 - 31 Jul 2021
Cited by 1
Abstract
The video synthetic aperture radar (ViSAR) is a new application in radar techniques. ViSAR provides high- or moderate-resolution SAR images with a faster frame rate, which permits the detection of the dynamic changes in the interested area. A moving target with moderate velocity can be detected by shadow detection in ViSAR. This paper analyses the frame rate and the shadow feature, discusses the velocity limitation of ViSAR moving target shadow detection and quantitatively gives the expression of velocity limitation. Furthermore, a fast factorized back projection (FFBP) based SAR video formation method and a shadow-based ground moving target detection method are proposed to generate SAR videos and detect the moving target shadow. The experimental results with simulated data prove the validity and feasibility of the proposed quantitative analysis and the proposed methods. Full article

Article
Three-Dimensional Interferometric ISAR Imaging Algorithm Based on Cross Coherence Processing
Sensors 2021, 21(15), 5073; https://0-doi-org.brum.beds.ac.uk/10.3390/s21155073 - 27 Jul 2021
Abstract
Interferometric inverse synthetic aperture radar (InISAR) has received significant attention in three-dimensional (3D) imaging due to its applications in target classification and recognition. The traditional two-dimensional (2D) ISAR image can be interpreted as a filtered projection of a 3D target’s reflectivity function onto an image plane. Such a plane usually depends on unknown radar-target geometry and dynamics, which results in difficulty interpreting an ISAR image. Using the L-shape InISAR imaging system, this paper proposes a novel 3D target reconstruction algorithm based on Dechirp processing and 2D interferometric ISAR imaging, which can jointly estimate the effective rotation vector and the height of scattering center. In order to consider only the areas of the target with meaningful interferometric phase and mitigate the effects of noise and sidelobes, a special cross-channel coherence-based detector (C3D) is introduced. Compared to the multichannel CLEAN technique, advantages of the C3D include the following: (1) the computational cost is lower without complex iteration and (2) the proposed method, which can avoid propagating errors, is more suitable for a target with multi-scattering points. Moreover, misregistration and its influence on target reconstruction are quantitatively discussed. Theoretical analysis and numerical simulations confirm the suitability of the algorithm for 3D imaging of multi-scattering point targets with high efficiency and demonstrate the reliability and effectiveness of the proposed method in the presence of noise. Full article
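
The cross-channel coherence idea can be sketched as follows (a simplified illustration with assumed names, not the paper's exact C3D): the local complex coherence between the two interferometric channels is estimated over a small window, and only pixels above a coherence threshold are retained for interferometric phase extraction.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def cross_channel_coherence(img1, img2, win=5):
        """img1, img2: complex ISAR images from the two interferometric channels.
        Returns the windowed coherence magnitude in [0, 1]."""
        cross = img1 * np.conj(img2)
        num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
        den = np.sqrt(uniform_filter(np.abs(img1) ** 2, win)
                      * uniform_filter(np.abs(img2) ** 2, win))
        return np.abs(num) / np.maximum(den, 1e-12)

    # Keep only strongly coherent pixels before interferometric phase extraction, e.g.:
    # coherent_mask = cross_channel_coherence(ch1, ch2) > 0.8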

Article
Cloud Cover throughout All the Paddy Rice Fields in Guangdong, China: Impacts on Sentinel 2 MSI and Landsat 8 OLI Optical Observations
Remote Sens. 2021, 13(15), 2961; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152961 - 27 Jul 2021
Abstract
Cloud cover hinders the effective use of vegetation indices from optical satellite-acquired imagery in cloudy agricultural production areas, such as Guangdong, a subtropical province in southern China which supports two-season rice production. The number of cloud-free observations for the earth-orbiting optical satellite sensors must be determined to verify how much their observations are affected by clouds. This study determines the quantified wide-ranging impact of clouds on optical satellite observations by mapping the annual total observations (ATOs), annual cloud-free observations (ACFOs), monthly cloud-free observations (MCFOs) maps, and acquisition probability (AP) of ACFOs for the Sentinel 2 (2017–2019) and Landsat 8 (2014–2019) for all the paddy rice fields in Guangdong province (APRFG), China. The ATOs of Landsat 8 showed relatively stable observations compared to the Sentinel 2, and the per-field ACFOs of Sentinel 2 and Landsat 8 were unevenly distributed. The MCFOs varied on a monthly basis, but in general, the MCFOs were greater between August and December than between January and July. Additionally, the AP of usable ACFOs with 52.1% (Landsat 8) and 47.7% (Sentinel 2) indicated that these two satellite sensors provided markedly restricted observation capability for rice in the study area. Our findings are particularly important and useful in the tropics and subtropics, and the analysis has described cloud cover frequency and pervasiveness throughout different portions of the rice growing season, providing insight into how rice monitoring activities by using Sentinel 2 and Landsat 8 imagery in Guangdong would be impacted by cloud cover. Full article

Article
A Fast Aircraft Detection Method for SAR Images Based on Efficient Bidirectional Path Aggregated Attention Network
Remote Sens. 2021, 13(15), 2940; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152940 - 27 Jul 2021
Abstract
In aircraft detection from synthetic aperture radar (SAR) images, there are several major challenges: the shattered features of the aircraft, the size heterogeneity and the interference of a complex background. To address these problems, an Efficient Bidirectional Path Aggregation Attention Network (EBPA2N) is proposed. In EBPA2N, YOLOv5s is used as the base network and then the Involution Enhanced Path Aggregation (IEPA) module and Effective Residual Shuffle Attention (ERSA) module are proposed and systematically integrated to improve the detection accuracy of the aircraft. The IEPA module aims to effectively extract advanced semantic and spatial information to better capture multi-scale scattering features of aircraft. Then, the lightweight ERSA module further enhances the extracted features to overcome the interference of complex background and speckle noise, so as to reduce false alarms. To verify the effectiveness of the proposed network, Gaofen-3 airports SAR data with 1 m resolution are utilized in the experiment. The detection rate and false alarm rate of our EBPA2N algorithm are 93.05% and 4.49%, respectively, which is superior to the latest networks of EfficientDet-D0 and YOLOv5s, and it also has an advantage of detection speed. Full article

Article
Hyperspectral and Multispectral Image Fusion Using Coupled Non-Negative Tucker Tensor Decomposition
Remote Sens. 2021, 13(15), 2930; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152930 - 26 Jul 2021
Abstract
Fusing a low spatial resolution hyperspectral image (HSI) with a high spatial resolution multispectral image (MSI), aiming to produce a super-resolution hyperspectral image, has recently attracted increasing research interest. In this paper, a novel approach based on coupled non-negative tensor decomposition is proposed. The proposed method performs a tucker tensor factorization of a low resolution hyperspectral image and a high resolution multispectral image under the constraint of non-negative tensor decomposition (NTD). The conventional matrix factorization methods essentially lose spatio-spectral structure information when stacking the 3D data structure of a hyperspectral image into a matrix form. Moreover, the spectral, spatial, or their joint structural features have to be imposed from the outside as a constraint to well pose the matrix factorization problem. The proposed method has the advantage of preserving the spatio-spectral structure of hyperspectral images. In this paper, the NTD is directly imposed on the coupled tensors of the HSI and MSI. Hence, the intrinsic spatio-spectral structure of the HSI is represented without loss, and spatial and spectral information can be interdependently exploited. Furthermore, multilinear interactions of different modes of the HSIs can be exactly modeled with the core tensor of the Tucker tensor decomposition. The proposed method is straightforward and easy to implement. Unlike other state-of-the-art approaches, the complexity of the proposed approach is linear with the size of the HSI cube. Experiments on two well-known datasets give promising results when compared with some recent methods from the literature. Full article
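
To make the building block concrete, here is a hedged sketch (assuming the TensorLy library; rank values and array shapes are illustrative) of a non-negative Tucker decomposition of an HSI cube and its reconstruction. The paper's contribution is the coupled decomposition of the HSI and MSI, which this fragment does not reproduce.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_tucker

    # Illustrative low-resolution HSI cube: rows x cols x bands, non-negative values.
    hsi = np.abs(np.random.rand(40, 40, 100))

    # Non-negative Tucker factorization with a chosen multilinear rank.
    core, factors = non_negative_tucker(tl.tensor(hsi), rank=[20, 20, 10], n_iter_max=100)

    # Reconstruct the cube and report the relative approximation error.
    hsi_hat = tl.tucker_to_tensor((core, factors))
    rel_err = tl.norm(hsi - hsi_hat) / tl.norm(hsi)
    print(f"relative reconstruction error: {rel_err:.3f}")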

Article
A Building Roof Identification CNN Based on Interior-Edge-Adjacency Features Using Hyperspectral Imagery
Remote Sens. 2021, 13(15), 2927; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152927 - 26 Jul 2021
Abstract
Hyperspectral remote sensing can obtain both spatial and spectral information of ground objects. It is an important prerequisite for a hyperspectral remote sensing application to make good use of spectral and image features. Therefore, we improved the Convolutional Neural Network (CNN) model by extracting interior-edge-adjacency features of building roof and proposed a new CNN model with a flexible structure: Building Roof Identification CNN (BRI-CNN). Our experimental results demonstrated that the BRI-CNN can not only extract interior-edge-adjacency features of building roof, but also change the weight of these different features during the training process, according to selected samples. Our approach was tested using the Indian Pines (IP) data set and our comparative study indicates that the BRI-CNN model achieves at least 0.2% higher overall accuracy than that of the capsule network model, and more than 2% than that of CNN models. Full article

Article
True-Color Reconstruction Based on Hyperspectral LiDAR Echo Energy
Remote Sens. 2021, 13(15), 2854; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152854 - 21 Jul 2021
Abstract
With the development of remote sensing technology, the simultaneous acquisition of 3D point cloud and color information has become the constant goal for scientific research and commercial applications in this field. However, since radar echo data in practice refer to the value of the spectral channel and its corresponding energy, it is still impossible to obtain accurate tristimulus values of the point through color integral calculation after traditional normalization and multispectral correction. Furthermore, the reflectance of the target, the laser transmission power and other factors lead to the problems of no echo energy or weak echo energy in some bands of the visible spectrum, which further leads to large chromatic difference compared to the color calculated from the spectral reflectance of standard color card. In response to these problems, the hyperbolic tangent spectrum correction model with parameters is proposed for the spectrum correction of the acquired hyperspectral LiDAR in the 470–700 nm band. In addition, the improved gradient boosting decision tree sequence prediction algorithm is proposed for the reconstruction of missing spectrum in the 400–470 nm band where the echo energy is weak and missing. Experimental results show that there is relatively small chromatic difference between the obtained spectral information after correction and reconstruction and the spectrum of standard color card, achieving the purpose of true color reconstruction. Full article

Article
Evaluation of Eight Global Precipitation Datasets in Hydrological Modeling
Remote Sens. 2021, 13(14), 2831; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142831 - 19 Jul 2021
Abstract
The number of global precipitation datasets (PPs) is on the rise and they are commonly used for hydrological applications. A comprehensive evaluation on their performance in hydrological modeling is required to improve their performance. This study comprehensively evaluates the performance of eight widely used PPs in hydrological modeling by comparing with gauge-observed precipitation for a large number of catchments. These PPs include the Global Precipitation Climatology Centre (GPCC), Climate Hazards Group Infrared Precipitation with Station dataset (CHIRPS) V2.0, Climate Prediction Center Morphing Gauge Blended dataset (CMORPH BLD), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks Climate Data Record (PERSIANN CDR), Tropical Rainfall Measuring Mission multi-satellite Precipitation Analysis 3B42RT (TMPA 3B42RT), Multi-Source Weighted-Ensemble Precipitation (MSWEP V2.0), European Center for Medium-range Weather Forecast Reanalysis 5 (ERA5) and WATCH Forcing Data methodology applied to ERA-Interim Data (WFDEI). Specifically, the evaluation is conducted over 1382 catchments in China, Europe and North America for the 1998-2015 period at a daily temporal scale. The reliabilities of PPs in hydrological modeling are evaluated with a calibrated hydrological model using rain gauge observations. The effectiveness of PPs-specific calibration and bias correction in hydrological modeling performances are also investigated for all PPs. The results show that: (1) compared with the rain gauge observations, GPCC provides the best performance overall, followed by MSWEP V2.0; (2) among the eight PPs, the ones incorporating daily gauge data (MSWEP V2.0 and CMORPH BLD) provide superior hydrological performance, followed by those incorporating 5-day (CHIRPS V2.0) and monthly (TMPA 3B42RT, WFDEI, and PERSIANN CDR) gauge data. MSWEP V2.0 and CMORPH BLD perform better than GPCC, underscoring the effectiveness of merging multiple satellite and reanalysis datasets; (3) regionally, all PPs exhibit better performances in temperate regions than in arid or topographically complex mountainous regions; and (4) PPs-specific calibration and bias correction both can improve the streamflow simulations for all eight PPs in terms of the Nash and Sutcliffe efficiency and the absolute bias. This study provides insights on the reliabilities of PPs in hydrological modeling and the approaches to improve their performance, which is expected to provide a reference for the applications of global precipitation datasets. Full article
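
The two hydrological scores used above have simple closed forms; a minimal sketch (variable names assumed, and the bias defined here as a relative volume bias for illustration) is:

    import numpy as np

    def nash_sutcliffe(sim, obs):
        """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def absolute_bias(sim, obs):
        """Absolute relative bias of simulated vs. observed streamflow volume."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return abs(sim.sum() - obs.sum()) / obs.sum()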

Article
Retrieving Sun-Induced Chlorophyll Fluorescence from Hyperspectral Data with TanSat Satellite
Sensors 2021, 21(14), 4886; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144886 - 18 Jul 2021
Cited by 1
Abstract
A series of algorithms for satellite retrievals of sun-induced chlorophyll fluorescence (SIF) have been developed and applied to different sensors. However, research on SIF retrieval using hyperspectral data is performed in narrow spectral windows, assuming that SIF remains constant. In this paper, based on the singular vector decomposition (SVD) technique, we present an approach for retrieving SIF, which can be applied to remotely sensed data with ultra-high spectral resolution and in a broad spectral window without assuming that the SIF remains constant. The idea is to combine the first singular vector, the pivotal information of the non-fluorescence spectrum, with the low-frequency contribution of the atmosphere, plus a linear combination of the remaining singular vectors to express the non-fluorescence spectrum. Subject to instrument settings, the retrieval was performed within a spectral window of approximately 7 nm that contained only Fraunhofer lines. In our retrieval, hyperspectral data of the O2-A band from the first Chinese carbon dioxide observation satellite (TanSat) was used. The Bayesian Information Criterion (BIC) was introduced to self-adaptively determine the number of free parameters and reduce retrieval noise. SIF retrievals were compared with TanSat SIF and OCO-2 SIF. The results showed good consistency and rationality. A sensitivity analysis was also conducted to verify the performance of this approach. To summarize, the approach would provide more possibilities for retrieving SIF from hyperspectral data. Full article
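
A stripped-down sketch of the SVD-based retrieval idea (names, window, and model are illustrative; it omits the low-frequency atmospheric term and the BIC model selection described above): the non-fluorescence spectrum is modeled as a linear combination of the leading singular vectors of SIF-free training spectra, and a constant SIF offset is estimated jointly by least squares within a Fraunhofer-line window.

    import numpy as np

    def retrieve_sif(training_spectra, measured, n_vectors=8):
        """training_spectra: (n_train, n_wavelengths) SIF-free radiance spectra.
        measured: (n_wavelengths,) observed radiance in a Fraunhofer-line window.
        Returns the estimated (constant) SIF offset and the fitted spectrum."""
        # Leading right-singular vectors span the non-fluorescence spectral shapes.
        _, _, vt = np.linalg.svd(training_spectra, full_matrices=False)
        basis = vt[:n_vectors].T                          # (n_wavelengths, n_vectors)
        # Design matrix: non-fluorescence basis plus a constant column for SIF.
        A = np.hstack([basis, np.ones((basis.shape[0], 1))])
        coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
        return coeffs[-1], A @ coeffs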

Article
Evaluation of Precise Microwave Ranging Technology for Low Earth Orbit Formation Missions with Beidou Time-Synchronize Receiver
Sensors 2021, 21(14), 4883; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144883 - 17 Jul 2021
Abstract
In this study, submillimeter level accuracy K-band microwave ranging (MWR) equipment is demonstrated, aiming to verify the detection of the Earth’s gravity field (EGF) and digital elevation models (DEM), through spacecraft formation flying (SFF) in low Earth orbit (LEO). In particular, this paper introduces in detail an integrated BeiDou III B1C/B2a dual frequency receiver we designed and developed, including signal processing scheme, gain allocation, and frequency planning. The receiver matched the 0.1 ns precise synchronize time-frequency benchmark for the MWR system, verified by a static and dynamic test, compared with a time interval counter synchronization solution. Moreover, MWR equipment ranging accuracy is explored in-depth by using different ranging techniques. The test results show that MWR achieved 40 μm and 1.6 μm/s accuracy for ranging and range rate during tests, using synchronous dual one-way ranging (DOWR) microwave phase accumulation frame, and 6 μm/s range rate accuracy obtained through a one-way ranging experiment. The ranging error sources of the whole MWR system in-orbit are analyzed, while the relative orbit dynamic models, for formation scenes, and adaptive Kalman filter algorithms, for SFF relative navigation designs, are introduced. The performance of SFF relative navigation using MWR are tested in a hardware in loop (HIL) simulation system within a high precision six degree of freedom (6-DOF) moving platform. The final estimation error from adaptive relative navigation system using MWR are about 0.42 mm (range/RMS) and 0.87 μm/s (range rate/RMS), which demonstrated the promising accuracy for future applications of EGF and DEM formation missions in space. Full article

Article
Refocusing of Moving Ships in Squint SAR Images Based on Spectrum Orthogonalization
Remote Sens. 2021, 13(14), 2807; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142807 - 17 Jul 2021
Cited by 1
Abstract
Moving ship refocusing is challenging because the target motion parameters are unknown. Moreover, moving ships in squint synthetic aperture radar (SAR) images obtained by the back-projection (BP) algorithm usually suffer from geometric deformation and spectrum winding. Therefore, a spectrum-orthogonalization algorithm that refocuses moving ships in squint SAR images is presented. First, “squint minimization” is introduced to correct the spectrum with two spectrum compression functions: one to align the spectrum centers and another to transform the inclined spectrum into an orthogonalized form. Then, the precise analytic function of the two-dimensional (2D) wavenumber spectrum is derived to obtain the phase error. Finally, motion compensation is performed in the 2D wavenumber domain after the motion parameters are estimated by maximizing image sharpness. The method has low computational complexity because it requires no interpolation and can be implemented with the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT). Processing results for simulated data and GaoFen-3 squint SAR data validate the effectiveness of the method.
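
The last step, estimating the motion parameter by maximizing image sharpness in the wavenumber domain, can be pictured as a simple grid search. The sketch below is a toy illustration that assumes a single quadratic phase-error coefficient over a normalized azimuth wavenumber axis; it is not the authors' estimator, and the name refocus_by_sharpness is hypothetical.

```python
import numpy as np

def refocus_by_sharpness(subimage, coeffs=np.linspace(-5e-6, 5e-6, 101)):
    """Illustrative sharpness-maximization autofocus for a complex SAR subimage.

    subimage : complex array (azimuth x range). A quadratic phase error along
    azimuth is assumed; its coefficient is found by grid search.
    """
    n_az = subimage.shape[0]
    k = np.fft.fftfreq(n_az)                   # normalized azimuth wavenumber axis
    spectrum = np.fft.fft(subimage, axis=0)    # to the azimuth wavenumber domain

    def sharpness(img):
        p = np.abs(img) ** 2
        return np.sum(p ** 2)                  # classical intensity-squared sharpness

    best_a, best_img, best_s = None, None, -np.inf
    for a in coeffs:
        correction = np.exp(-1j * a * k ** 2)[:, None]
        candidate = np.fft.ifft(spectrum * correction, axis=0)
        s = sharpness(candidate)
        if s > best_s:
            best_a, best_img, best_s = a, candidate, s
    return best_a, best_img
```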

Article
CPISNet: Delving into Consistent Proposals of Instance Segmentation Network for High-Resolution Aerial Images
Remote Sens. 2021, 13(14), 2788; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142788 - 15 Jul 2021
Abstract
Instance segmentation of high-resolution aerial images is more challenging than object detection and semantic segmentation in remote sensing applications. It adopts boundary-aware mask predictions, instead of traditional bounding boxes, to locate the objects of interest pixel-wise. Meanwhile, instance segmentation can distinguish densely distributed objects within a category by assigning each a distinct label, which is unavailable in semantic segmentation. Despite these distinct advantages, few methods are dedicated to high-quality instance segmentation of high-resolution aerial images. In this paper, a novel instance segmentation method for high-resolution aerial images, termed consistent proposals of instance segmentation network (CPISNet), is proposed. Following the top-down instance segmentation paradigm, it adopts an adaptive feature extraction network (AFEN) to extract multi-level, bottom-up augmented feature maps at the design-space level. Then, an elaborated RoI extractor (ERoIE) is designed to extract mask RoIs using the refined bounding boxes from the proposal-consistent cascaded (PCC) architecture and the multi-level features from AFEN. Finally, a convolution block with a shortcut connection generates the binary mask for each instance. Experiments on the iSAID and NWPU VHR-10 instance segmentation datasets support the following conclusions: (1) each individual module in CPISNet contributes to the overall instance segmentation performance; (2) CPISNet* exceeds vanilla Mask R-CNN by 3.4%/3.8% AP on the iSAID validation/test sets and by 9.2% AP on the NWPU VHR-10 instance segmentation dataset; (3) aliased masks, missing segmentations, false alarms, and poorly segmented masks are reduced to some extent by CPISNet; (4) CPISNet achieves high-precision instance segmentation of aerial images and delineates objects with well-fitting boundaries.

Technical Note
A Sparse Denoising-Based Super-Resolution Method for Scanning Radar Imaging
Remote Sens. 2021, 13(14), 2768; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142768 - 14 Jul 2021
Abstract
Scanning radar enables wide-area imaging through antenna scanning and is widely used for radar warning. The Rayleigh criterion indicates that a narrow radar beam is required to improve the azimuth resolution; however, a narrower beam means a larger antenna aperture. In practical applications, platform constraints limit the antenna aperture, resulting in low azimuth resolution. The conventional sparse super-resolution method (SSM) has been proposed to improve the azimuth resolution of scanning radar imaging and has achieved superior performance. It uses the L1 norm to represent the sparse prior of the target and solves the L1 regularization problem to achieve super-resolution imaging under the regularization framework, efficiently improving the resolution of strong point targets. However, for targets with characteristic shapes, the strong sparsity of the L1 norm treats them as point targets, so their shape characteristics are lost and only the strong points remain in the processed results. In applications that require detailed target identification, SSM can therefore lead to misjudgments. In this paper, a sparse denoising-based super-resolution method (SDBSM) is proposed to compensate for this deficiency of traditional SSM. The proposed SDBSM uses a sparse minimization scheme for denoising, which helps to reduce the influence of noise, and super-resolution imaging is then achieved by alternating iterative denoising and deconvolution. Because SDBSM applies the L1 norm to denoising rather than to deconvolution, the strong sparsity constraint of the L1 norm is relaxed, so the method can effectively preserve the target shape while improving the azimuth resolution. The performance of the proposed SDBSM is demonstrated with simulation and real data processing results.
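
The alternation between L1 denoising and deconvolution can be sketched as a soft-threshold (proximal) step followed by one gradient step on the convolution data-fit term. The toy below assumes a 1-D azimuth profile, a known antenna pattern, and a Landweber-style deconvolution step; it only illustrates the general idea, not the authors' exact scheme, and the name sdbsm_like is hypothetical.

```python
import numpy as np

def sdbsm_like(echo, antenna_pattern, lam=0.01, step=0.5, n_iter=100):
    """Toy alternation of L1 denoising and deconvolution for a 1-D azimuth profile.

    echo            : measured profile (antenna pattern convolved with scene + noise)
    antenna_pattern : assumed convolution kernel
    """
    def conv(x, h):
        return np.convolve(x, h, mode="same")

    def soft_threshold(x, t):
        # L1 proximal operator, used here for denoising rather than deconvolution
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    x = echo.copy()
    h_flip = antenna_pattern[::-1]
    for _ in range(n_iter):
        # 1) sparse denoising step: suppresses noise without forcing the
        #    deconvolved result itself to be a few isolated points
        x = soft_threshold(x, lam)
        # 2) deconvolution step: one Landweber/gradient iteration on ||h*x - echo||^2
        residual = conv(x, antenna_pattern) - echo
        x = x - step * conv(residual, h_flip)
    return x
```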

Technical Note
Inversion of Geothermal Heat Flux under the Ice Sheet of Princess Elizabeth Land, East Antarctica
Remote Sens. 2021, 13(14), 2760; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142760 - 14 Jul 2021
Abstract
Antarctic geothermal heat flux is a basic input variable for ice sheet dynamics simulations. It strongly affects the temperature and mechanical properties at the bottom of the ice sheet, influencing sliding, melting, and internal deformation. Because Antarctica is covered by a thick ice sheet, direct measurements of heat flux are very limited. This study estimates regional heat flux on the Antarctic continent through geophysical inversion. Princess Elizabeth Land, East Antarctica, is one of the areas where geothermal heat flux is still poorly understood. Using the latest airborne geomagnetic data, we inverted the Curie depth and then obtained the bedrock heat flux from the one-dimensional steady-state heat conduction equation. The results indicate that the Curie depth of Princess Elizabeth Land is shallower than previously estimated and that the heat flux is consequently higher. Thus, the contribution of subglacial heat flux to melting at the bottom of the ice sheet is likely greater than previously expected in this region. This also provides clues to the formation of the well-developed subglacial water system in Princess Elizabeth Land.
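
For orientation, the link between Curie depth and heat flux in a one-dimensional steady-state conduction model can be written, in its simplest form with constant conductivity and radiogenic heat production neglected, as Fourier's law

q = k (T_C − T_0) / z_C,

where k is the thermal conductivity of the crust, T_C ≈ 580 °C is the Curie temperature of magnetite, T_0 is the temperature at the top of the bedrock, and z_C is the inverted Curie depth. A shallower Curie depth therefore directly implies a higher heat flux, which is the chain of reasoning behind the result above; the full inversion in the paper is more involved, and this expression only conveys the governing idea.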

Article
A Novel Method for Refocusing Moving Ships in SAR Images via ISAR Technique
Remote Sens. 2021, 13(14), 2738; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142738 - 12 Jul 2021
Abstract
As an active microwave remote sensing device, synthetic aperture radar (SAR) has been widely used in marine surveillance. However, moving ships appear defocused in SAR images, which seriously affects ship classification and identification. Considering the three-dimensional (3-D) rotational motions (roll, pitch, and yaw) of a navigating ship, a novel method for refocusing moving ships in SAR images based on the inverse synthetic aperture radar (ISAR) technique is proposed. First, a rectangular window is used to extract the defocused ship subimage. Next, the subimage is transformed into the equivalent ISAR echo domain, and the range migration and phase error caused by the motion common to all ship scatterers are compensated. Then, the optimal imaging time is selected by a maximum-image-contrast search. Finally, the iterative adaptive approach (IAA) is used to obtain a high-resolution image. The method achieves satisfactory azimuth resolution and image focus, and its computational cost is small because only subimages are processed. Simulated data and real Gaofen-3 SAR data are used to verify the effectiveness of the proposed method.
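
The optimal-imaging-time selection can be illustrated with the standard image-contrast criterion (standard deviation of the intensity divided by its mean) evaluated over sliding slow-time windows. The sketch below is a simplified stand-in: the plain FFT imaging and the parameter names are assumptions, and the real method also includes the compensation and IAA steps described above.

```python
import numpy as np

def select_optimal_window(echo, window_len, step=16):
    """Toy maximum-contrast search over candidate imaging intervals.

    echo : complex echo matrix (slow time x range) after translational motion
           compensation; each candidate slow-time window is imaged with a plain
           FFT and scored by image contrast.
    """
    def contrast(img):
        p = np.abs(img) ** 2
        return np.sqrt(np.mean((p - p.mean()) ** 2)) / p.mean()

    best_score, best_start = -np.inf, 0
    for start in range(0, echo.shape[0] - window_len + 1, step):
        sub = echo[start:start + window_len]
        image = np.fft.fftshift(np.fft.fft(sub, axis=0), axes=0)  # crude ISAR image
        c = contrast(image)
        if c > best_score:
            best_score, best_start = c, start
    return best_start   # slow-time index where the best-focused interval begins
```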

Article
A Multi-Task Network with Distance–Mask–Boundary Consistency Constraints for Building Extraction from Aerial Images
Remote Sens. 2021, 13(14), 2656; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142656 - 06 Jul 2021
Abstract
Deep-learning technologies, especially convolutional neural networks (CNNs), have achieved great success in building extraction from aerial images. However, shape details are often lost during down-sampling, which results in discontinuous segmentation or inaccurate segmentation boundaries. To compensate for this loss of shape information, two shape-related auxiliary tasks (boundary prediction and distance estimation) were learned jointly with the building segmentation task in our proposed network. Meanwhile, two consistency-constraint losses were designed on top of the multi-task network to exploit the duality between the mask prediction and the two shape-related predictions. Specifically, an atrous spatial pyramid pooling (ASPP) module was appended to the top of the encoder of a U-shaped network to obtain multi-scale features. Based on these multi-scale features, one regression loss and two classification losses were used to predict the distance-transform map, the segmentation mask, and the boundary. Two inter-task consistency-loss functions were constructed to ensure consistency between distance maps and masks, and between masks and boundary maps. Experimental results on three public aerial image data sets showed that our method achieved superior performance over recent state-of-the-art models.
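
The duality between the mask and the two shape-related outputs can be made concrete with simple differentiable consistency terms. The PyTorch sketch below is a generic illustration under assumed tensor shapes and a soft-threshold coupling between the distance map and the mask; it is not the exact loss formulation of the paper.

```python
import torch
import torch.nn.functional as F

def consistency_losses(mask_logits, dist_pred, boundary_logits):
    """Illustrative inter-task consistency terms (not the paper's exact losses).

    mask_logits     : (B, 1, H, W) building-mask logits
    dist_pred       : (B, 1, H, W) predicted signed, normalized distance map
    boundary_logits : (B, 1, H, W) building-boundary logits
    """
    mask = torch.sigmoid(mask_logits)
    boundary = torch.sigmoid(boundary_logits)

    # 1) Distance-mask consistency: a mask derived from the signed distance
    #    map (positive inside buildings) should agree with the predicted mask.
    mask_from_dist = torch.sigmoid(10.0 * dist_pred)   # soft threshold at zero
    loss_dist_mask = F.l1_loss(mask_from_dist, mask)

    # 2) Mask-boundary consistency: the spatial gradient magnitude of the mask
    #    should be large exactly where the boundary map is active.
    dy = mask[:, :, 1:, :] - mask[:, :, :-1, :]
    dx = mask[:, :, :, 1:] - mask[:, :, :, :-1]
    grad = F.pad(dy.abs(), (0, 0, 0, 1)) + F.pad(dx.abs(), (0, 1, 0, 0))
    loss_mask_boundary = F.l1_loss(grad.clamp(0, 1), boundary)

    return loss_dist_mask, loss_mask_boundary
```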

Article
A Convolutional Neural Network Based on Grouping Structure for Scene Classification
Remote Sens. 2021, 13(13), 2457; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132457 - 23 Jun 2021
Abstract
Convolutional neural networks (CNNs) can automatically extract image features and have been widely used in remote sensing image classification, in which feature extraction remains an important and difficult problem. In this paper, data augmentation was used to avoid overfitting and to enrich the features of the samples, in order to improve the performance of a newly proposed convolutional neural network on the UC-Merced and RSI-CB datasets for remote sensing scene classification. A multiple grouped convolutional neural network (MGCNN) for self-learning, capable of improving the efficiency of CNNs, was proposed, and the method of grouping multiple convolutional layers was developed so that it can be applied elsewhere as a plug-in model. Meanwhile, a hyper-parameter C was introduced in MGCNN to probe the influence of different grouping strategies on feature extraction. Experiments on the two selected datasets, RSI-CB and UC-Merced, were carried out to verify the effectiveness of the proposed network; the accuracy obtained by MGCNN was 2% higher than that of ResNet-50. An attention mechanism was then adopted and incorporated into the grouping process, and a multiple grouped attention convolutional neural network (MGCNN-A) was constructed to enhance the generalization capability of MGCNN. Additional experiments indicate that incorporating the attention mechanism into MGCNN slightly improved the scene classification accuracy, while considerably enhancing the robustness of the proposed network in remote sensing image classification.
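
Grouped convolutions of the kind the network builds on are easy to express in PyTorch; the block below is a generic illustration of how a grouping hyper-parameter C trades parameters for per-group specialization, not a reconstruction of the MGCNN architecture.

```python
import torch
import torch.nn as nn

class GroupedBlock(nn.Module):
    """Illustrative grouped-convolution block with a grouping hyper-parameter C.

    Splitting the channels into C groups means each 3x3 convolution mixes only
    the channels within its own group, cutting its parameters by roughly a
    factor of C; the following 1x1 convolution re-mixes information across groups.
    """
    def __init__(self, channels=256, C=4):
        super().__init__()
        self.grouped = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=C, bias=False)
        self.mix = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.mix(self.grouped(x))) + x)  # residual connection


# Example: a larger C shrinks the 3x3 parameter count while keeping the same output shape.
block = GroupedBlock(channels=256, C=8)
out = block(torch.randn(1, 256, 32, 32))
```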

Article
The Extraction of Street Curbs from Mobile Laser Scanning Data in Urban Areas
Remote Sens. 2021, 13(12), 2407; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122407 - 19 Jun 2021
Abstract
The demand for mobile laser scanning in urban areas has grown in recent years. Mobile light detection and ranging (LiDAR) technology can be used to collect high-precision digital information on city roads and building façades. However, because curbs are small, the information available for curb detection is limited, and occlusion may prevent an extraction method from correctly capturing the curb area. This paper presents an algorithm for extracting street curbs from mobile LiDAR point cloud data to support city managers in street deformation monitoring and urban street reconstruction. The proposed method extracts curbs in three complex scenarios: curbs covered by vegetation, curved street curbs, and curbs occluded by vehicles or pedestrians. The method combines spatial and geometric information, using the spatial attributes of the road boundary, and can adapt to different heights and different road boundary structures. Analyses of real study sites show the rationality and applicability of this method for obtaining accurate results in curb-based street extraction from mobile LiDAR data. The overall performance of road curb extraction is fully discussed, and the results are promising: both the completeness and the correctness of the extracted left and right road edges are greater than 98%.

Article
Remote Sensing Based Yield Estimation of Rice (Oryza Sativa L.) Using Gradient Boosted Regression in India
Remote Sens. 2021, 13(12), 2379; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122379 - 18 Jun 2021
Abstract
Accurate and spatially explicit yield information is required to ensure farmers’ income and food security at local and national levels. Current approaches based on crop cutting experiments are expensive and usually come too late for timely income stabilization measures such as crop insurance. We therefore used Gradient Boosted Regression (GBR), a machine learning technique, to estimate rice yields at ~500 m spatial resolution for rice-producing areas in India, with potential application to near-real-time estimates. We used resampled intermediate-resolution (~5 km) Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf Area Index (LAI) images and observed district-level yields in India to calibrate the GBR models, which were then used to downscale district yields to 500 m resolution. The downscaled yields were re-aggregated for validation against out-of-sample district yields not used for model training and against an additional independent data set of block-level (sub-district) yields. Our downscaled and re-aggregated yields agree well with reported district-level observations from 2003 to 2015 (r = 0.85, MAE = 0.15 t/ha). Model performance improved further when separate models were estimated for different rice cropping densities (up to r = 0.93). An additional out-of-sample validation for the years 2016 and 2017 was also successful, with r = 0.84 and r = 0.77, respectively. Simulated yield accuracy was higher in water-limited, rainfed agricultural systems. We conclude that this GBR-based downscaling approach to rice yield estimation is feasible across India and may complement current approaches for the timely rice yield estimation required by insurance companies and government agencies.
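
As a rough picture of the downscaling workflow, a gradient-boosted regressor can be trained on district-aggregated LAI features against reported district yields and then applied pixel by pixel, with the pixel predictions re-aggregated for validation. The scikit-learn sketch below is only a schematic under assumed feature definitions and hyper-parameters, not the calibrated model from the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_district_model(district_lai_features, district_yields):
    """district_lai_features: (n_districts, n_features), e.g. LAI at several dates."""
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                      max_depth=3, subsample=0.8)
    model.fit(district_lai_features, district_yields)
    return model

def downscale(model, pixel_lai_features, pixel_district_ids):
    """Predict yield per 500 m pixel, then average back to districts for checking."""
    pixel_yield = model.predict(pixel_lai_features)
    district_mean = {d: pixel_yield[pixel_district_ids == d].mean()
                     for d in np.unique(pixel_district_ids)}
    return pixel_yield, district_mean
```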

Article
Integration of InSAR and LiDAR Technologies for a Detailed Urban Subsidence and Hazard Assessment in Shenzhen, China
Remote Sens. 2021, 13(12), 2366; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122366 - 17 Jun 2021
Abstract
Spaceborne interferometric synthetic aperture radar (InSAR) methodology has been widely and successfully applied to measure slow, small-magnitude urban surface subsidence. However, its accuracy is still limited by the spatial resolution of currently operating SAR systems and by the imprecise geolocation of the respective scatterers. In this context, high-precision urban models, as provided by active laser point clouds acquired through light detection and ranging (LiDAR) techniques, can assist in improving the geolocation quality of InSAR-derived permanent scatterers (PS) and provide precise building contours for hazard analysis. This paper proposes integrating InSAR and LiDAR technologies for an improved, detailed analysis of subsidence levels and a hazard assessment of buildings in the urban environment. Using LiDAR data, most building contours in the main subsidence area were extracted and the SAR positioning of buildings via PS points was refined. The workflow of the proposed method includes land subsidence monitoring with the TS-InSAR technique, geolocation improvement of the InSAR-derived PS, and building contour extraction from LiDAR data. Furthermore, a hazard assessment system for land subsidence was developed. Significant vertical subsidence of −40 to 12 mm per year was detected from the analysis of multisensor SAR images. The land subsidence rates in Shenzhen clearly follow certain spatial patterns: the most stable areas are located in the middle and northeast of Shenzhen, except for some areas in Houhai, Qianhai Bay, and Wankeyuncheng. An additional hazard assessment of land subsidence reveals that building subsidence is mainly caused by the construction of new buildings and, in some cases, by underground activities. The results of this paper can provide a useful synoptic reference for urban planning and help reduce land subsidence in Shenzhen.

Article
Tests with SAR Images of the PAZ Platform Applied to the Archaeological Site of Clunia (Burgos, Spain)
Remote Sens. 2021, 13(12), 2344; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122344 - 15 Jun 2021
Abstract
This article presents the first results obtained from the use of high-resolution images from the SAR-X sensor on the PAZ satellite platform. The results come from the application of various radar image-processing techniques, with which we carried out a non-invasive exploration of areas of the archaeological site of Clunia (Burgos, Spain). These areas were analyzed and contrasted with other sources: high-resolution multispectral images (TripleSat) and digital surface models derived from Light Detection and Ranging (LiDAR) data from the National Plan for Aerial Orthophotography (PNOA), processed with image enhancement functions (Relief Visualization Tools, RVT). Moreover, they were compared with multispectral images created from the Infrared Red Blue (IRRB) data contained in the same LiDAR points.