Review

Earth Environmental Monitoring Using Multi-Temporal Synthetic Aperture Radar: A Critical Review of Selected Applications

1 Surrey Space Centre, University of Surrey, Guildford GU27XH, UK
2 Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80131 Naples, Italy
3 Airbus Defense and Space, Guildford GU27XH, UK
4 Department of Civil Engineering, University of Salerno, 84135 Salerno, Italy
* Author to whom correspondence should be addressed.
Academic Editor: Stefano Perna
Received: 24 December 2020 / Revised: 27 January 2021 / Accepted: 3 February 2021 / Published: 8 February 2021
(This article belongs to the Special Issue Multi-temporal Synthetic Aperture Radar)

Abstract

Microwave remote sensing has widely demonstrated its potential for the continuous monitoring of our rapidly changing planet. This review provides an overview of state-of-the-art methodologies for multi-temporal synthetic aperture radar change detection and its applications to biosphere and hydrosphere monitoring, with special focus on forestry, water resources management in semi-arid environments and floods. The analyzed literature is categorized according to the approach adopted and the data exploited, and it is discussed in light of the downstream remote sensing market. The purpose is to highlight the main issues and limitations preventing the wider adoption of synthetic aperture radar data in both industrial and multidisciplinary research contexts, as well as possible solutions for boosting their usage among end-users.
Keywords: change detection; flood mapping; forestry; multi-temporal analysis; synthetic aperture radar; water resources

1. Introduction

Climate change and rapid population growth are important stress factors for the Earth ecosystem. The biosphere and hydrosphere, crucial for human communities, are particularly threatened nowadays. In this context, satellite observations represent a key asset for large-scale environmental monitoring thanks to their capability to provide synoptic information with a high revisit frequency, useful to public and private decision-makers both for planning activities and for promptly responding to emergency situations.
Remote sensing has been variously defined in the past literature. Basically, it is "the practice of deriving information about the Earth's land and water surfaces using images acquired from an overhead perspective, using electromagnetic (EM) radiation in one or more regions of the electromagnetic spectrum, reflected or emitted from the Earth's surface" [1]. This definition highlights the most important themes of the topic, i.e., the gathering of information at a distance and the use of pictures or series of pictures to represent this information. Synthetic Aperture Radar (SAR) sensors are of particular interest in remote sensing thanks to the peculiar characteristics of microwaves, which are able to penetrate clouds, making them insensitive to weather and illumination conditions [2].
However, SAR sensors are still scarcely used in multidisciplinary and operative environments, which are largely dominated by the exploitation of multispectral (MS) and hyperspectral (HS) data [3], with the availability of the latter increasing thanks to the recent launch of the Italian spaceborne sensor PRISMA [4]. This is principally due to the higher information content these data hold, thanks to the wider available bandwidth, and to their intuitive interpretability. These properties allowed for the development of simple and robust solutions for the extraction of geophysical parameters from images [5]; it is possible, for example, to retrieve the extent of water surfaces [6] or of a forest [7] by just applying a ratio of appropriate spectral bands.
In the case of SAR, the bandwidth available in a single acquisition is lower, and this complicates the information extraction process. This can be compensated by exploiting temporal information. However, robust algorithms for the retrieval of even basic information from the data are missing, with few exceptions relevant to the maritime domain [8] or to millimeter-scale displacement estimation using Differential SAR Interferometry (DInSAR) methodologies [9], for which the landscape is richly populated by users and applications able to generate value also in the industrial sector. In almost all other cases, SAR data processing is mostly limited to scientists and requires knowledge of electromagnetics, radar and signal processing. This leaves the overall capability of this imaging system highly underexploited by end-users, and with an exploitation route not yet identified, demonstrating the value of the technology remains difficult.
The objective of this review is to highlight how SAR observations can support environmental monitoring activities through a discussion of the latest research exploiting multi-temporal concepts for adding value to data in some selected applications. Despite their scarce diffusion in the end-user community, SAR data can enable several applications, especially in near-real time, or make a fundamental contribution through data assimilation into geophysical/hydrological/weather models and/or integration with multisource information. Due to the breadth of the field, a special focus will be dedicated to some applications related to biosphere and hydrosphere monitoring, while many others dealing, for example, with the urban environment [10,11,12], classification [13,14,15] or the estimation of small [16,17,18] and large [19,20,21] displacements will be mentioned only in passing.
The lifecycle of SAR data, from the acquisition to the generation of value-added products, the so-called Level-2 products (i.e., those carrying some geophysical information about the imaged scene), is shown in Figure 1. A successful information process requires an adequate filling of the information gap between the measurements stored in Level-1 products (i.e., those released by data providers) and the parameters that can be of interest for applications and/or remote sensing data users.
Most of the literature addresses this through algorithms able to bridge the gap between the ground segment and the user segment. Some authors have expressed the need to restore the central role of analysts and users in the remote sensing information process [5,22,23], as they are penalized by the extreme automation required to process large datasets like those involved in multi-temporal analysis. This led to the definition of new classes of products, obtained through innovative combinations of temporal information, which introduce an intermediate processing level [5,15] (see Figure 1) that can be further developed toward Level-2 or serve as support for visual interpretation [24]. This process model represents a recent alternative to classic SAR workflows and moves toward the user community. Its aim is to increase the exploitation of SAR information by scientists and professionals who may have the data access and the analytical/critical capability for integrating data in their processes, but not necessarily the mathematical background to understand the phenomena which generated that particular representation [5].
In Reference [25], it is argued that two distinct forms of knowledge exist, i.e., objective knowledge and subjective knowledge. When images are acquired beyond the visible spectrum, as in the case of SAR, the understanding and interpretation of the physical principles leading to that particular measurement and/or arrangement of pixels is a matter of subjective knowledge. The higher the expertise of the analyst, the higher the probability of successfully completing the information process at hand. Classic Level-1 SAR products can be unsuitable for the extraction of relevant information, which is the reason many multidisciplinary users turn toward other data sources they can more easily handle. This leads to the concept of emergent semantics. According to [26], the information content of an image is not an intrinsic property but an emergent characteristic arising through the interaction that its users have with it, i.e., it is somewhat contextual, depending on the conditions in which a particular query is made and on the user. One of the objectives of the aforementioned improved representations of SAR images is to make this context more uniform and less dependent on the specific background of the operator who, having a better understanding of the data, should be able to make better decisions about how to extract relevant information from them.
This process can be sketched out through a semiotic paradigm, i.e., the Peirce triangle [27], which provides a schematic representation of how a concept is formed in our mind. As shown in Figure 2, it is composed of three inter-related elements: the sign, the object and the interpretant.
The sign is everything that can be perceived; in this context, it is the real world as filtered by the imaging sensor. The object is what the sign refers to and exists independently of the sign, i.e., it is the physical real-world object. Finally, the interpretant is the understanding that an observer reaches of some sign/object relation. When the sign is an image acquired beyond the visible spectrum, the formation of the interpretant can be hampered by a representation that is not familiar to many of its potential users. This can be mitigated by introducing a representation aligned with the expectations of the remote sensing data user, who is often accustomed to dealing with MS data and/or geographical information systems (GISs).
However, as argued in [28], "a representation is a formal system making explicit certain entities or types of information." As an example, the Arabic, Roman and binary numeral systems constitute formal systems for representing numbers. In particular, the Arabic system, being decimal, allows for an explicit representation of the decomposition of a number into powers of 10. Therefore, forty-two equals 4 × 10^1 + 2 × 10^0 = 42. In the binary system, this would be 101010; in the Roman one, XLII. If the objective is to use this number for some calculation on a machine, the binary representation is the best one, provided that the output is converted back to the base 10 we are used to. Instead, if the objective is to understand whether a number is or is not a power of ten, then the Arabic representation is more powerful. The usefulness of a representation depends on how well suited it is for a specific purpose. As an example, when the aforementioned multi-temporal composites are exploited, the information highlighted concerns the land cover and its temporal changes, but the phase information is lost. As a consequence, this representation should not be used if the objective of the analysis is the estimation of land subsidence. Even the speckle, which most consider an annoying characteristic of radar imaging, can turn into a source of information, since it can be tracked for estimating large displacements using cross-correlation similarity measures [19] or exploited to detect the presence of urban areas through its peculiar statistics [29].
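The argument about purpose-dependent representations can be illustrated with a short sketch (plain Python; the helper `is_power_of_ten` is introduced purely for illustration):

```python
n = 42

arabic = str(n)        # decimal: makes powers of 10 explicit -> "42"
binary = bin(n)[2:]    # base 2: what a machine computes with -> "101010"
roman = "XLII"         # Roman numeral for 42

# In the Arabic representation, checking "is this a power of ten?"
# reduces to a trivial string inspection; in binary it would not.
def is_power_of_ten(decimal_string):
    return decimal_string[0] == "1" and set(decimal_string[1:]) <= {"0"}

print(binary)                    # 101010
print(is_power_of_ten(arabic))   # False
print(is_power_of_ten("1000"))   # True
```

The same value, three formal systems: each makes a different property of the number explicit.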
In the following, we discuss how the literature has aimed at filling the information gap sketched in Figure 1 using time series of SAR data, in both classic and innovative representations of their information content. In particular, Section 2 is dedicated to classic change detection, with an opening on best practices for data pre-processing. Section 3 addresses innovative change detection approaches by means of higher-level data representations. Section 4 deals with applications. The breadth of the topic does not allow a comprehensive review of all existing work in a single paper; therefore, the focus is placed on selected applications dealing with the biosphere and the hydrosphere. Discussion is provided in Section 5, together with some market insights. Conclusions are drawn at the end of the work.

2. Classic SAR Multi-Temporal Pre-Processing and Change Detection Approaches

When performing multi-temporal analysis, it is crucial that data are adequately pre-processed to ensure their comparability at geometric and radiometric levels. In the SAR literature, this phase is very well established. Therefore, the objective of this Section is to briefly recall it for the sake of completeness and the convenience of the reader.
The general schema of the (amplitude) multi-temporal SAR pre-processing workflow accounts for data calibration, coregistration and despeckling. Calibration is essential for making images radiometrically comparable, i.e., this operation ensures that an object/surface, if unchanged, exhibits the same reflectivity function along the whole time series [30]. It is implemented through calibration coefficients provided in the image metadata, taking into account sensor and orbit effects. In the case of stripmap and spotlight data, usually one calibration constant is applied to the whole image (see, as an example, [31,32] for the calibration procedures for COSMO-SkyMed and TerraSAR-X data, respectively). For ScanSAR or TOPSAR acquisitions, instead, a calibration vector/matrix is provided in order to account for signal attenuation phenomena in the slant range direction. The application of such calibration coefficients makes the data reliable enough to fit practically all the applications implemented with satellite imagery today. As an example, COSMO-SkyMed single-look complex products have a declared radiometric calibration accuracy smaller than 1 dB [33]. The estimated values for Sentinel-1A/B images are smaller than 0.40 dB [34].
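As a minimal sketch of the calibration step, the following applies the general form sigma0 = |DN|^2 / A^2 (the convention used, e.g., for Sentinel-1 products, where the coefficient A comes from the product metadata; the constant value used here is purely illustrative):

```python
import numpy as np

def calibrate_sigma0_db(dn, a_sigma):
    """Radiometric calibration sketch: convert raw digital numbers (DN)
    to sigma-nought in dB through a calibration coefficient taken from
    the product metadata (sigma0 = |DN|^2 / A^2). Real products may use
    a per-pixel coefficient matrix rather than a scalar."""
    sigma0 = (np.abs(dn) ** 2) / (a_sigma ** 2)
    return 10.0 * np.log10(sigma0)

# Toy example: a 2x2 DN patch with a single calibration constant,
# as is typical for stripmap/spotlight acquisitions.
dn = np.array([[100.0, 200.0], [50.0, 400.0]])
cal = calibrate_sigma0_db(dn, a_sigma=1000.0)
```

For ScanSAR/TOPSAR data, `a_sigma` would instead be an array broadcast along the slant range direction.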
Coregistration is essential to make images geometrically comparable. It ensures that each target on the ground corresponds to the same pixel in all the images of the time series. This operation is probably the most robust of the SAR pre-processing chain, with standard algorithms achieving an alignment in the order of fractions of a pixel [35].
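A coarse, whole-pixel flavor of this alignment can be sketched with phase correlation (a simplified stand-in for operational coregistration chains, which refine the offset to sub-pixel accuracy):

```python
import numpy as np

def estimate_shift(master, slave):
    """Estimate the integer (row, col) offset between two images by
    phase correlation: normalize the cross-power spectrum to keep only
    phase, then locate the correlation peak. Returns the shift that,
    applied to the slave with np.roll, realigns it to the master."""
    F1 = np.fft.fft2(master)
    F2 = np.fft.fft2(slave)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap offsets larger than half the image size to negative shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Toy example: a known misalignment is recovered exactly.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
offset = estimate_shift(img, shifted)
```

Here `offset` is `(-3, 5)`: rolling the shifted image by that amount restores alignment with the master.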
Despeckling is crucial to enhance the information content of SAR images [36]. Speckle is due to the presence of targets with sub-resolution dimensions within the resolution cell. The returned echo is determined by the coherent summation of all the contributions from the single scatterers, and this causes a random return even from homogeneous areas and the typical salt-and-pepper appearance of the data [35], which can be mitigated with appropriate denoising techniques. This is one of the most discussed topics in the SAR literature, with several papers published each year. The availability of multi-temporal datasets allows for excellent performance by exploiting the synergy of spatial and temporal information [37,38], even through the application of the non-local paradigm [39]. Today, the frontier of research in this field is the exploitation of deep learning concepts [40,41,42,43].
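The temporal leverage can be shown with the simplest possible sketch, plain temporal multilooking (operational multi-temporal filters such as De Grandi's are considerably more refined and handle changing pixels):

```python
import numpy as np

def temporal_multilook(stack):
    """Minimal temporal despeckling sketch: average intensity along the
    time axis. Under the hypothesis of an unchanged scene, averaging M
    independent looks reduces the speckle variance by a factor of M."""
    return stack.mean(axis=0)

# Toy demonstration: fully developed speckle is modeled as exponential
# multiplicative noise over a constant-reflectivity scene.
rng = np.random.default_rng(1)
reflectivity = 5.0
stack = reflectivity * rng.exponential(1.0, size=(32, 64, 64))  # 32 dates
filtered = temporal_multilook(stack)

var_single = stack[0].var()     # speckle variance of one acquisition
var_filtered = filtered.var()   # roughly var_single / 32
```

The radiometric mean is preserved while the salt-and-pepper variance collapses, which is why textures become visible after multi-temporal filtering.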
Pre-processing operations make data ready for temporal information extraction or for building higher-level representations according to the framework introduced in Section 1. In both cases, the objective is to identify changes in the imaged scene and, possibly, assign a higher-level semantics to changing patterns based on their different temporal electromagnetic response.
Leaving out classification aspects, which probably deserve a separate review due to the breadth of the topic and of the possible methodologies, the following discussion will focus on classic change detection approaches, i.e., those aiming at the sign-object association sketched in Figure 2 without resorting to any interpretant. The information gap graphically represented in Figure 1 is filled using an algorithm, often strongly parametrized as a function of the target to be identified, basically excluding the operator from the information processing. This can be due either to the necessity of extreme automation, for example to deliver 24/7 services, or to the difficulties related to the algorithm set-up, which requires users with a strong technical background.
In the literature, change detection is defined as the identification of differences in the state of an object/target/pattern by observing it at different times [44]. This is one of the most discussed topics in remote sensing [45] and represents the simplest declination of multi-temporal analysis, since the process is usually reduced to pairs of images. In the SAR community, change detection is typically implemented through the segmentation of an opportunely defined change indicator [46]. The literature provides a variety of solutions. The simplest one is the difference operator, actually mostly used to process MS data [47]. However, its application causes differences in detection accuracy between high- and low-intensity areas [48]. This makes the ratio operator the most exploited, as it is more stable with respect to changes occurring in areas characterized by different reflectivity [49] and makes the detection more robust to speckle and calibration inaccuracies [50].
Typically, the ratio image is expressed in logarithmic scale in order to make its distribution more symmetrical and to enhance the differences between changed and unchanged pixels [50,51,52]. Other change indicators introduced in the past literature include normalized multi-temporal band comparison [53,54], information-theoretical similarity measures (such as the Kullback–Leibler divergence [55] and the mutual information [56]) and likelihood ratios [57,58,59,60].
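The log-ratio indicator just described reduces to a one-line operation on calibrated, coregistered, despeckled intensity images (sketch; the `eps` guard is an implementation detail, not part of the definition):

```python
import numpy as np

def log_ratio(img_ref, img_test, eps=1e-10):
    """Classic SAR change indicator: log-ratio of two intensity images.
    The logarithm symmetrizes the distribution and balances changes
    occurring in high- and low-backscatter areas."""
    return np.log10((img_test + eps) / (img_ref + eps))

# Toy scene: backscatter doubles in one quadrant, unchanged elsewhere.
ref = np.ones((64, 64))
test = ref.copy()
test[:32, :32] *= 2.0
lr = log_ratio(ref, test)
```

Unchanged pixels map to 0; a doubling of backscatter maps to log10(2) ≈ 0.3, regardless of the absolute reflectivity level, which is the property that makes the ratio preferable to the difference for SAR.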
The decision level is usually addressed through threshold segmentation of the selected information layer. However, this operation may be critical [56]. Basic algorithms working on histogram analysis, like Otsu thresholding [61], usually fail on SAR images because their hypotheses (bi-modal distribution and classes equally represented in the scene) are typically not satisfied. Empirical trial-and-error thresholding [49] and semi-supervised clustering are not applicable to operative scenarios because they require strong supervision and/or rely on the availability of relevant training samples. Bayes theory and the Kittler–Illingworth minimum-error thresholding have been exploited to strengthen automatic thresholding methodologies [50,62,63,64], and these represent the most adopted solutions in the literature. Fully multi-temporal change indicators, i.e., those working on the entire time series rather than on a bi-temporal basis, like the absolute change estimator [65], the generalized likelihood ratio test [57] and the second- and third-order texture log-cumulants [66], have also been proposed. In these cases, since the processing typically accounts for an average behavior of the scene, the time at which the change happens influences the results. If the change happens at the beginning of the time series, its effects are summed up along the time, so it is more likely to be highlighted in the final change layer.
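For concreteness, Otsu's criterion (maximization of the between-class variance of the histogram) can be written in a few lines; the toy data below are deliberately bi-modal and balanced, i.e., exactly the conditions that, as noted above, SAR change indicators often violate:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Self-contained Otsu thresholding sketch: pick the histogram bin
    that maximizes the between-class variance
    sigma_b^2 = (mu_T * w0 - mu)^2 / (w0 * w1)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 probability
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)             # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0    # guard empty classes
    return centers[np.argmax(between)]

# Bi-modal, balanced data: the regime where Otsu behaves as intended.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 0.5, 5000),
                       rng.normal(4.0, 0.5, 5000)])
t = otsu_threshold(data)   # lands between the two modes, near 2
```

On a real SAR change indicator, where changed pixels are a small minority, this threshold drifts toward the dominant mode, which motivates the Bayes and Kittler–Illingworth alternatives cited above.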
In the last years, the increased availability of multisource data led to the development of change detection techniques able to combine heterogeneous data-cubes [67], where SAR and MS data constitute the preferred sources of sensory information. According to [68], the fusion can take place at the pixel, feature and decision levels.
In decision-level fusion, the diverse datatypes follow independent workflows for information extraction up to the final merge, with decision rules tailored to the specific case study. This is probably the least adopted fusion solution in change detection problems. An example of this methodology can be found in [69], where the authors proposed to combine deforestation maps separately derived from ALOS-PALSAR and Landsat data.
In pixel-based fusion, data are combined at the sensory data level. This approach has been used, as an example, in [70], where the authors presented a fusion framework for deforestation detection based on an optimized regression model of MS and SAR time series.
The feature-based fusion approach requires the extraction of features and/or objects from multisensor data. They are combined using statistical methods, fuzzy logic or artificial neural networks. This is the most adopted approach to data fusion applied to change detection problems.
As an example, Poulain et al. [71] proposed to detect new buildings by extracting several features indicative of the presence of such structures from both SAR and MS images and combining them using the Dempster–Shafer theory of evidence [72]. The feature-based fusion of SAR and MS data is also exploited in [73] for building damage assessment after an earthquake. In this work, the optical image is used to retrieve the geometric parameters of the intact building, whose SAR response is simulated and compared with that of the real image. Differences between the simulated signature and the real one indicate that the building is damaged. Reference [74] adopted feature-based fusion of MS and SAR images in a geographic information system (GIS) environment for detecting environmental hazards due to cattle-breeding facilities in Southern Italy. Polychronaki et al. [75] proposed an object-based methodology fusing ALOS-PALSAR with optical images to detect fire scars in Mediterranean forests. This approach (see [76] for a complete review) is generally more robust and flexible than the pixel-based one. However, it introduces several challenges related to proper image segmentation and to the retrieval of relevant attributes from objects [75,77,78]. This makes it difficult to introduce in multidisciplinary operative contexts.
These principles have been applied to address several SAR remote sensing problems and applications like urbanization [10,79] and its consequences on the urban climate [11], flood [80] and deforestation [81] mapping, water resources management [82], large displacement estimation of terrains [21,83,84] and glaciers [19,20,85] using intensity tracking algorithms, and recovery after natural disasters [73]. An overview of the literature produced on some of them, namely forestry, water resources management and flood mapping, will be provided in Section 4.

3. Change Detection Using Higher-Level Multi-Temporal Representations

In the Introduction, it was pointed out that an alternative to classic SAR processing is possible by introducing higher-level multi-temporal representations exploiting appropriate color-coding of the microwave information. This processing paradigm allows for the formation of a relevant interpretant of the sign-object relation sketched in Figure 2, thus improving the user's experience with the data and their understanding of the EM response of the scene.
This concept has been variously employed in the literature. References [86,87] used incoherent bi-temporal RGB compositions to enhance the presence of flooded areas and to assess building damage after a typhoon, respectively. The exploitation of interferometric coherence in the generation of bi-temporal SAR color composites was proposed in [88] with the purpose of mapping floods. Reference [89] introduced a new class of bi-temporal SAR products, known as Level-1α, obtained by combining both intensity and phase information within a more general-purpose processing framework. Alves et al. [90] exploited three-temporal fusion of Cassini images to enhance the visualization of Titan drainage networks. The same strategy was employed by [91] to highlight the presence of floods. Schmitt et al. [92] developed a methodology for colorizing Sentinel-1 images using a variational autoencoder conditioned on Sentinel-2 data. More general methodologies for displaying changes in SAR time series have been developed in [93] with the REACTIV technique (free software available at https://w3.onera.fr/medusa/content/reactiv (accessed on 1 February 2020)) and in [15] with the introduction of Level-1β products.
Figure 3 shows the comparison between the standard SLC representation and the corresponding change-detection Level-1α product [89]. The scene was acquired in a rural area of semi-arid Burkina Faso by the COSMO-SkyMed sensor in stripmap mode with three-meter spatial resolution during the wet season, when a land cover with abundance of water and vegetation is expected [82]. However, this is not immediately understandable from the standard SLC SAR image reported in Figure 3a due to (i) the typical gray-scale representation of the data, which is related to the reflectivity function of the particular land cover type, and (ii) the presence of speckle hampering the emergence of relevant textures useful to discriminate different image features. This representation is not suitable, as an example, to distinguish vegetation from bare soils.
The understanding of the observer and his/her capability to make correct decisions in the information processing at hand can be improved by exploiting a multi-temporal, higher-level, change-detection-oriented representation following the process model introduced in Section 1. The product shown in Figure 3b was built by combining an image acquired during the dry season, when the landscape is almost completely dry, with the wet-season acquisition previously introduced. In particular, the dry-season image has been loaded on the blue channel and constitutes the reference situation for the identification of changes. The wet-season image is loaded on the green band, while the red channel is reserved for the interferometric coherence.
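The channel arrangement just described amounts to a simple stacking operation once the inputs are calibrated, coregistered and despeckled. The sketch below is a simplified illustration of a Level-1α-style composite, not the exact published chain (the percentile stretch, in particular, is an assumption made for display purposes):

```python
import numpy as np

def level1_alpha(ref_dry, test_wet, coherence):
    """Level-1alpha-style RGB change composite sketch:
    R = interferometric coherence, G = wet-season (test) intensity,
    B = dry-season (reference) intensity. Each intensity channel is
    percentile-stretched to [0, 1] for display."""
    def stretch(x, lo=1, hi=99):
        a, b = np.percentile(x, [lo, hi])
        return np.clip((x - a) / (b - a + 1e-12), 0.0, 1.0)
    return np.dstack([np.clip(coherence, 0.0, 1.0),
                      stretch(test_wet),
                      stretch(ref_dry)])

# Toy inputs: a basin floods in the wet season (backscatter drops),
# so the dry reference dominates and the area renders blue.
rng = np.random.default_rng(3)
dry = rng.uniform(0.2, 0.8, (64, 64))
wet = dry.copy()
wet[:16, :16] = 0.01                 # temporary water: low backscatter
coh = np.zeros((64, 64))             # no man-made targets in this toy
rgb = level1_alpha(dry, wet, coh)
```

In the flooded block, the blue channel (dry reference) dominates the green one (wet test), reproducing the "temporary water in blue" rendering discussed in the text.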
The reader should note how this representation, known in the literature as Level-1α [5], is much more informative than the standard Level-1. First, it allows the immediate identification of patterns of growing vegetation, depicted in green, and the presence of temporary water, rendered in blue. Moreover, the introduction of full multi-temporal processing, allowing for the exploitation of temporal despeckling techniques [37], makes image textures explicit. This helps the visual segmentation of homogeneous patterns because the edges are better defined. Finally, it enables the extraction of higher-quality texture measures, mostly originally defined for optical/consumer images [94] not affected by speckle, which can be used as information layers in automatic change detection and/or classification methodologies [95].
The introduction of a color representation of the microwave backscattering also makes the information mining of the operator easier, especially for those features whose appearance is congruent with the natural color palette. This happens, as an example, for temporary water surfaces, displayed in blue, and vegetation, rendered in green. In the first case, the color of the composition is due to the dominance of the terrain scattering of the dry-season image over that of the water surface covering the basin area during the wet season. In the same way, the green color of vegetation is due to the volumetric backscattering enhancement triggered by the growth of crops and grass during the wet season [96].
For the other features, the composition returns a false color determined by their temporal backscattering behavior. As an example, soils remaining bare throughout the year exhibit a balance of the blue and green channels (see the rightmost part of Figure 3b). Trees, having high and stable backscattering throughout the year, are rendered in cyan. Permanent water surfaces are displayed in black due to their low and stable backscattering. The usage of the interferometric coherence is important to separate man-made targets (usually stable with respect to the signal phase) from highly reflective natural targets. Its contribution, together with that of the intensity channels, renders these features, i.e., small human settlements scattered across the study area, in white in Figure 3b.
One of the basic problems in computer vision is to allow the observer to segment the image into meaningful regions [97], preventing the appearance of bright saturated areas and/or non-natural colors, which have been judged to be distracting and confusing [98]. In other words, it is fundamental that the visualization favors the observer's pre-attentive processing, i.e., his/her unconscious accumulation of information from the surrounding environment [99].
Figure 4 shows the comparison between a Level-1α product [5] (see Figure 4a) and a bi-temporal color composite obtained as described in [86] (see Figure 4b). The imaged area is that of two small reservoirs in semi-arid Burkina Faso. The RGB images were obtained by using, as the reference situation for the identification of changes, an image acquired during the dry season. Test data are constituted by an acquisition made during the wet season.
In particular, in Figure 4b, the reference dry-season image has been loaded on the blue band and the wet-season test image on the green one, while the red channel is given by the difference between the two aforementioned channels. To allow for a fair comparison, a multi-temporal De Grandi despeckling was applied using the same number of images employed to build the Level-1α product of Figure 4a.
The reader should easily note that the rendering of the information in Figure 4a,b is completely different. In the Level-1α product, the reservoirs are displayed in blue. As explained before, this is due to the dominance of the terrain scattering of the reference (dry) situation with respect to that produced by the water layer during the wet season. Using the composition proposed in [86], this phenomenology results in a significant contribution of the red band (i.e., the difference image), which turns the color of the composition to magenta. Moreover, in Figure 4b, it is not possible to identify any of the small settlements scattered around the lakes which are visible in Figure 4a.
Overall, it is possible to argue that the use of the composition reported in Figure 4b would hamper the general understanding of the observer by giving a quite distracting and confusing representation of the EM response of the scene. This composition is more oriented to the enhancement of changing patterns, but it may not be so effective for labeling activities. In other words, if operators are not aware of the meaning of the change they are observing in the data, it could be difficult to assign a relevant label to it, because the representation of the information is not so helpful in the formation of the interpretant of the relation between the sign and the object. On the other side, in the Level-1α representation, the changes are enhanced in such a way as to help the operator assign a relevant semantics to them.
The Level-1α representation is very well suited for highlighting the presence of temporary water surfaces; these features are rendered in blue [5], that is, the kind of rendering the non-expert observer expects for, as an example, a flooded area. However, if the objective is the monitoring of land, then this representation can be unpleasant, since the rendering of terrains is far from the natural color palette (see Figure 5a). In this case, it can be useful to exchange the roles of the interferometric coherence and of the image used as reference for the identification of changes [89], i.e., to swap the red and the blue channels. This leads to the representation depicted in Figure 5b, which is closer to the natural color palette, since a significant red component is retrieved in the rendering of terrains. Clearly, in this case, flooded areas would be rendered in red tonalities, as would all the phenomena related to a decrease of the backscattering with respect to the reference situation like, as an example, the removal of vegetation canopies.
Another way to highlight the changes occurring in a scene is to exploit the temporal statistics of the SAR series [15,65,66,93]. In Figure 6, some fully multi-temporal products are reported. The first (see Figure 6a) has been generated by the chain proposed in [15], and it is known as a Level-1β image. The second (see Figure 6b) represents the output of the REACTIV technique discussed in [93]. The third (see Figure 6c) is the absolute change indicator map obtained as explained in [65]. The last one (see Figure 6d) is the second-order log-cumulants map produced as discussed in [66]. In all cases, the input is constituted by a time series of six images acquired over the city of Castel Volturno (Italy) between April and October 2010 by the COSMO-SkyMed sensor in stripmap mode with a spatial resolution of three meters.
As explained in [15], the channels composing the RGB frame of a Level-1β product are extracted from the statistics of the time series. In particular, the red band is given by the temporal backscattering variance and the green channel by the average backscattering. The blue one is obtained by combining the interferometric coherence and the saturation index, i.e., the normalized backscattering span calculated pixel-wise. In particular, the interferometric coherence is used when its temporal average is above a user-defined threshold, and it is therefore significant only in the presence of man-made targets. Otherwise, the saturation index is displayed.
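The channel construction described above can be sketched as follows (a simplified numpy illustration, not the exact chain of [15]; the threshold value, the saturation-index formula and the omission of display scaling are our assumptions):

```python
import numpy as np

def level1_beta_like(stack, coherence_stack, coh_threshold=0.4):
    """Sketch of a Level-1beta-style composite from a SAR time series.

    stack           : (T, H, W) calibrated backscatter, linear power
    coherence_stack : (T', H, W) interferometric coherence maps
    Red   <- temporal variance of the backscattering
    Green <- temporal mean of the backscattering
    Blue  <- mean coherence where it exceeds a user-defined threshold,
             otherwise the saturation index (normalized span)
    """
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    smax, smin = stack.max(axis=0), stack.min(axis=0)
    saturation = (smax - smin) / (smax + smin + 1e-12)  # normalized span
    mean_coh = coherence_stack.mean(axis=0)
    blue = np.where(mean_coh > coh_threshold, mean_coh, saturation)
    return np.stack([var, mean, blue], axis=-1)
```

Stable targets thus end up with low red (variance) and a blue channel driven by coherence, while changing crops gain red and saturation-index contributions.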
This leads to a color coding related to the dynamics of the scene land cover. Looking at Figure 6a, the sea appears in almost pure blue due to its low average backscattering, occasionally subject to spikes caused by waves. Stable land cover, mainly grasslands, exhibits a dominance of the green band, representative of the average backscattering. Urban areas are rendered in cyan due to their temporally stable backscattering and their high stability with respect to the phase signal. Changing areas are characterized by high variance, i.e., a significant contribution from the red channel, which can be observed on crops. They are rendered in yellowish and pinkish colors depending on the crop type and/or the occurrence of events like harvesting, which cause the saturation index contribution to rise.
The principles driving the temporal composite depicted in Figure 6b are different [93], since there the colors are associated with the time at which the change is detected rather than with the type of change, and this makes the color–phenomenology association difficult. Moreover, only a fraction of the changing patterns visually detectable in Figure 6a is highlighted in the representation of Figure 6b, which tends to enhance only abrupt changes and is thus more oriented toward the application to long time series.
The time at which the change happens is also crucial when traditional fully multi-temporal change detection operators like those presented in [65] (see Figure 6c) and [66] (see Figure 6d) are considered. As discussed in Section 2, they tend to privilege abrupt changes that occurred at the beginning of the time series, since the output map is basically a cumulative layer. These products are not able to properly catch slow changes lasting throughout the time series, like the growth of crops, and this makes most of the agricultural land of the analyzed scene appear quite uniform and not well separated from unchanging features. As for the REACTIV output, these products are more oriented toward the analysis of long and dense time series with a higher probability of occurrence of abrupt and long-lasting land cover variations.
As a general comment, Level-1β images are a more flexible analysis tool than the other considered products. They offer a user-friendly representation of the SAR information with a balanced color display, characterized by a limited occurrence of saturated and/or distracting patterns. They are able to highlight both slow and abrupt changes and to adequately separate them by rendering them in different colors. The physically based association between the color of the composition and the change that occurred on the scene makes it easier to assign a relevant semantics (i.e., a label) to the detected changing patterns. This is also useful for land cover classification purposes, while the other analyzed products mainly allow only for the identification of changes.
The reviewed change detection literature has been summarized in Table 1, categorized based on the methodology and the application domain. More insights and comments on the different approaches are provided in Section 5.

4. Applications

In this Section, the concepts introduced in Section 2 and Section 3 are elaborated in an application-oriented context, with particular reference to forestry and water resources. The latter are analyzed from the perspectives of both scarcity in semi-arid environments and hazards due to floods. The objective is to provide a complete picture of the literature on these topics, highlighting the different approaches and their limitations.

4.1. Forestry

Forests currently cover about 40% of Earth’s ice-free land surface although, over the last centuries, a large fraction of forestland has been converted to agricultural and urban uses. Nevertheless, we depend on them for the production of paper products, lumber and fuelwood and, most importantly, for the preservation of the Earth ecosystem, as forests are the sentinels of the carbon cycle [100] and the core of Earth’s biodiversity. With projected increases in human population and rising standards of living, both the importance of and the pressure on forests are expected to increase. The challenge is to set up protocols and methodologies allowing for the sustainable management and exploitation of this critical resource [101].
Despite some critical positions raised in the past literature [102,103,104], the role of remote sensing as a proxy for effective sustainable forest management is crucial. This is because its implementation requires synoptic and repetitive information about several geophysical/biochemical variables to be delivered to decision-makers to support their planning efforts [105,106]. As suggested in [107,108], many of them, like forest extent by type [109], topography [110], fragmentation and connectedness of forest ecosystem components [111], areas damaged by disease and insect infestation [112], areas damaged by fires [113] or deforestation [81], areas successfully naturally and/or artificially regenerated [114,115] and above-ground biomass (AGB) volume [116], can be fully or partially addressed using remote sensing.
Remote sensing of forests can be effectively implemented using MS data [117]. Some operational services for the temporal monitoring of forests already exist, like PRODES [118] and DETER-B (data available for download at http://www1.dpi.inpe.br/obt/deter/dados/ (accessed on 1 February 2020)) [119], operative in the Amazon, and the Global Land Analysis and Discovery (GLAD, datasets available for download at https://www.glad.umd.edu/dataset (accessed on 1 February 2020)) [120]. Landsat data have quite recently been exploited to map global forest loss and gain from 2000 to 2012 at a spatial resolution of 30 m [121]. In this study, more than 650,000 Landsat 7 Enhanced Thematic Mapper Plus (ETM+) scenes were analyzed to create a cloud-free dataset. However, in tropical regions, massive cloud coverage throughout the year [122] and the fast recovery of vegetation can hamper the detection of changes in forest cover. For these reasons, some authors expressed concern regarding the validity of this map [123,124].
The use of SAR of course solves the problem of cloud coverage but introduces several others related to the complex scattering mechanisms triggered by vegetation layers [96]. In this case, the role of the wavelength is crucial, since it determines the penetration depth of the electromagnetic radiation within the forest canopy. This can be roughly explained by looking at Figure 7. Working at short wavelengths, i.e., those corresponding to the X-band (about 3 cm) and the C-band (about 5 cm), the main contribution to the backscattering is that provided by the upper strata of the forest canopy, mainly constituted by leaves and twigs [125]. At C-band, some return from small secondary branches is also possible. Using longer wavelengths, i.e., that corresponding to the L-band (about 27 cm), the canopy becomes mainly transparent, and the backscattered signal is due to primary branches and the trunk. Complete studies of radar backscattering from forests have been provided by Hoekman [126] for the X-band, Westman and Paris [127] for the C-band and Richards et al. [128] for the L-band.
The difference in the backscattering mechanisms triggered by different wavelengths strongly influences the suitability of a sensor for a given application. As an example, many authors suggested the exploitation of short wavelengths to classify different tree species. Rignot et al. [129] claimed that airborne C-band images perform better than L- and P-band ones over a boreal forest. In particular, the authors concluded that the branching geometry and foliage of certain tree species (like white spruce and leafless balsam poplar) strongly affect the C-band SAR signal, facilitating the separation of the species. Similar conclusions concerning the potential of the X-band were drawn in [126,130,131,132]. On the other hand, Reference [133] reported the inconsistency of X-band images in separating broadleaved and coniferous species. Saatchi and Rignot [134] found that the P- and L-bands are better at discriminating between jack pine and black spruce, as well as between coniferous and broadleaved species, and this conflicts with the findings reported in [129]. These contradictory results suggest that the choice of an appropriate band depends on the examined forest, its structure being widely variable with age, management and biome [135].
When the application is the mapping of forest disturbances, the literature landscape is dominated by the exploitation of long wavelengths, which are able to maximize the contrast between forest features and bare land and pastures, being more sensitive to the AGB content within the resolution cell. Most of the literature is therefore focused on the exploitation of data acquired by the Japanese sensors JERS-1 (1992–1998), ALOS-PALSAR (2006–2011) and ALOS-PALSAR-2 (since 2014), whose acquisition strategies were designed to produce annual global mosaics, thus resolving previous spatial coverage issues. Thanks to these mosaics, Reference [136] produced the first SAR-based annual global maps of forest/non-forest cover. The global mosaics can be freely downloaded at https://www.eorc.jaxa.jp/ALOS/en/palsar_fnf/fnf_index.htm (accessed on 1 February 2020).
All this constituted the basis for a number of works. Mermoz and Le Toan [137] mapped forest disturbances and regrowth in Vietnam, Cambodia and Lao PDR from 2007 to 2010. Reference [138] explored the feasibility of detecting forest disturbances from the repeat-pass interferometric coherence magnitude of ALOS images. Almeida-Filho et al. [139] used multi-temporal JERS-1 images to detect deforestation patterns in the Brazilian Amazon. They interestingly observed that, under particular circumstances, deforestation areas may be characterized by higher backscattering values than undisturbed primary forest. This is due to the fact that the conversion from primary forest to pastures or agricultural fields involves different phases, like slashing, burning and clearing (i.e., the removal of stems and branches that were not entirely consumed by fire). Thus, depending on the acquisition time of the SAR image, the electromagnetic return from deforested areas can be stronger than that from primary forest, contrary to the general premise that the latter exhibits higher reflectivity. Similar findings were reported in [140], in which the authors ascribed the increase in the backscattering coefficient just after deforestation, detected through Landsat images, to felled trees left on the ground. Small-scale deforestation and forest degradation in Mozambique and their relation with the carbon stock were studied in [141]. The suitability of ALOS-PALSAR data for detecting new fronts of deforestation in the Brazilian Amazon was assessed in [142]. Motohka et al. [143] used an annual time series of ALOS-PALSAR mosaics to automatically detect deforestation in Indonesia. They suggested that HV polarized data represent an effective parameter for deforestation detection when a fixed threshold is used.
Actually, the choice of the polarization (or combination of polarizations) to be used for deforestation mapping is controversial [137]. Some studies exploited the cross-polarized channel only [143,144,145], while others suggested the combination of co- and cross-polarized channels [142,146]. In any case, it is clear that the availability of the cross-polarized channel is crucial for forest studies, since the scattering from vegetation layers is likely to depolarize the signal, thus making this channel more sensitive than the co-polarized one to the presence of such structures [125].
An interesting study by Walker et al. [147] compared the results of forest mapping obtained using ALOS-PALSAR and Landsat data. The authors concluded that, although increasingly robust, SAR data should not be considered a replacement for MS data sources. However, they encouraged research on methodologies exploiting the synergy between the all-weather/all-time imaging characteristics of microwaves and the direct link with biomass proper to MS acquisitions to provide the most effective means for reliable forest monitoring. This is the way followed, as an example, in [69,70,148], whose authors proposed multi-sensor data fusion of multi-temporal ALOS-PALSAR and Landsat data for the mapping and monitoring of tropical deforestation.
The wide availability of C-band data due to the Sentinel-1 mission is also re-opening research on forestry at this particular wavelength (as happened in the past thanks to the ERS [149,150], SIR-C [81,130] and ENVISAT [151] missions). Clearly, the shorter wavelength compared with the L-band limits the penetration depth into the canopy, and this makes the data less suited for forest disturbance assessment [152]. Reference [153] used Sentinel-1 datasets (mostly bi-temporal) for detecting forest fires. Reiche et al. [154] proposed to mitigate the limitations of the C-band through multi-sensor data fusion with ALOS-PALSAR-2 and Landsat MS images. Forest mapping using both the phase and amplitude of Sentinel-1 InSAR stacks has recently been presented in [155]. A particularly innovative methodology was recently introduced by Bouvet et al. [152], who used the shadowing effect typical of the side-looking SAR acquisition geometry to identify new deforestation patterns. In this work, the authors also pointed out that the C-band backscattering of deforested areas does not necessarily decrease compared to that of an undisturbed forest. Following considerations similar to those made in [139] for the L-band, they argue that deforested areas that are cleared or burnt typically exhibit lower backscattering, but patches with leftovers remaining on the ground may show a similar or higher return than intact forests due to a branches–ground double-bounce scattering mechanism. The same phenomenon can be due to increased moisture in deforested patches because of rainfall.
Experimental evidence of the different behaviour of SAR backscattering as a function of the wavelength can be better appreciated by looking at the boxplots reported in Figure 8 and Figure 9, which represent the average co-polarized and cross-polarized EM response, respectively, of forest and land features (as extracted from SLC images) for the different available sensors. In these plots, each box represents the dispersion of data around the median value, depicted by the red bar. The box edges indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, which are plotted individually using the ‘+’ symbol.
From the analysis of the reflectivity for the co-polarized channels, represented in beta nought ( β 0 ) values, it emerges that the best separability, as expected, is obtained by using L-band ALOS-PALSAR data, with the span of the medians of the forest and land classes on the order of 7 dB. The C-band Sentinel-1 works quite well, with that span at around 5.3 dB. However, contrary to the case of L-band images, the boxes representative of the EM response of the two classes exhibit quite a significant overlap. This means that there is a higher chance of interclass confusion if the reflectivity alone is used for classification. A similar situation is registered when X-band TerraSAR-X data are used. In this case, the span between the medians is further reduced with respect to the C-band (it is on the order of 2.8 dB). Moreover, a wider overlap between the relevant class boxes is registered. This makes forest studies more challenging at this particular wavelength, with the most successful ones relying on the availability of TerraSAR-X synchronized interferometric tandem acquisitions [156], which also made possible the derivation of a global forest/non-forest map [157].
The rightmost boxes in Figure 8 refer to images acquired by the recently launched NovaSAR platform [158]. In this case, calibrated beta nought ( β 0 ) values expressed in dB can be retrieved from the SLC digital numbers (DN) as follows
β 0 = 10 log 10 ( DN / K 1 ),
where K 1 is the calibration constant provided in the product’s metadata. The reader should remember that β 0 is the projection of the sigma nought ( σ 0 ) onto the slant range.
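As a minimal illustration of this calibration step (the function name is ours, and the exact form of the relation should be checked against the NovaSAR product documentation):

```python
import numpy as np

def beta_nought_db(dn, k1):
    """Calibrated beta nought in dB from SLC digital numbers, following
    the relation reported in the text (assumed form). K1 is the
    calibration constant taken from the product's metadata."""
    return 10.0 * np.log10(np.asarray(dn, dtype=float) / k1)
```

For example, with K1 = 1, a digital number of 100 maps to 20 dB.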
The peculiarity of this sensor is that it is today the only one operating at S-band, i.e., with a wavelength of about 10 cm. Due to this wavelength, intermediate between the L-band and the C-band, its behaviour concerning the penetration depth into the vegetation canopy is intermediate as well. As a consequence, the discrimination of forest stands from land features should work better on such images than at C-band and worse than at L-band [159]. This is reflected in the first available data acquired during the commissioning phase of the platform. The span between the medians of the distributions of the forest and land classes is about 4.1 dB, slightly lower than that registered for the C-band case, but the vertical extent of the boxes is smaller, i.e., the data are more homogeneously distributed around the medians, and this results in the absence of any overlap between the boxes. Therefore, there is a higher chance, compared with the C-band, of successfully separating the two classes from their EM return.
The above considerations are confirmed if the boxplots for the cross-polarized channel reported in Figure 9 are considered. In this case, the span between the average scattering of the forest and bare land classes increases for all the wavelengths due to the higher sensitivity of the cross-polarized channel to vegetation canopies [125]. In particular, the span for ALOS-PALSAR L-band data increases up to about 10 dB. For C-band Sentinel-1 and X-band TerraSAR-X data, it is about 6 dB. However, as in the co-polarized case, only ALOS-PALSAR data do not show any overlap between the boxes representative of the class distributions. The situation improves for the C-band, the overlap of the boxes now being strongly reduced with respect to the co-polarized case. For the X-band, the situation is mostly unchanged; therefore, significant interclass confusion should be expected whatever polarization is used. Data for NovaSAR images were not available in the HV polarization at the time of this research.
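The median-span and box-overlap figures discussed above can be reproduced from class samples with a few lines of code (an illustrative helper built on standard boxplot statistics, not taken from the cited works):

```python
import numpy as np

def class_separability(class_a_db, class_b_db):
    """Boxplot-style separability between two sets of backscatter
    samples (in dB): the absolute span between the class medians and
    whether the interquartile boxes (25th-75th percentiles) overlap.
    A large span with non-overlapping boxes suggests the classes can
    be separated from their EM return alone."""
    med_a, med_b = np.median(class_a_db), np.median(class_b_db)
    q1_a, q3_a = np.percentile(class_a_db, [25, 75])
    q1_b, q3_b = np.percentile(class_b_db, [25, 75])
    boxes_overlap = (q1_a <= q3_b) and (q1_b <= q3_a)
    return abs(med_a - med_b), boxes_overlap
```

Applied to forest and land backscatter samples, it returns, for example, a 7 dB span with no box overlap for an L-band-like case.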
Image texture is another important parameter in the discrimination of forest patterns from bare land or agricultural ones [160,161] and can be effectively exploited as an information layer in change detection algorithms. Forests are in fact characterized by significant texture, which is severely affected by the transformation into other land uses.
Figure 10 shows an example of the use of this information for the detection of deforestation patterns in the Brazilian Amazon. In particular, Figure 10a,b report two Google Earth views of the study area acquired in 2007 and 2010, respectively. The corresponding Level-1α SAR products [24] are depicted in Figure 10c,d. They share the reference situation for the evaluation of land cover changes, which was acquired by the ALOS-PALSAR sensor in April 2007 and is loaded on the red band. The images used for the detection of changes were acquired in July 2007 (see Figure 10c) and September 2010 (see Figure 10d) and are loaded on the green band. The blue band is reserved for a texture measure calculated from the last-mentioned acquisitions.
As explained in Section 2, changing patterns are identified by red or green shades, indicating, respectively, a decrease or an increase of the backscattering with respect to the reference situation. The product depicted in Figure 10c, involving two acquisitions made with a short temporal baseline, does not exhibit any significant pattern with one of the aforementioned colors. This means that the scene is mostly stable. Conversely, it is possible to appreciate the emergence of those patterns in the representation of Figure 10d, with the red ones corresponding to the deforestation areas visible in Figure 10b. This is due to a double drop of the reflectivity and of the texture caused by the removal of the vegetation canopy, with the consequent exposure of smoother soil.
These phenomena can be better appreciated by computing the relevant normalized multi-temporal band ratios from the RGB images [53], as shown in Figure 10e for the reflectivity and in Figure 10f for the texture. It is evident that the brightest areas in these pictures correspond to the red ones in the panels above.
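A normalized multi-temporal ratio of this kind can be computed per band as sketched below (the sign convention, chosen so that backscattering or texture drops appear bright, and the stabilizing eps term are our assumptions; the exact formulation is given in [53]):

```python
import numpy as np

def normalized_span(reference_band, test_band, eps=1e-12):
    """Normalized multi-temporal band ratio, in the spirit of [53]:
    (reference - test) / (reference + test). With the reference on the
    red band, a drop of the backscattering (or of the texture) with
    respect to the reference situation yields bright positive values."""
    ref = np.asarray(reference_band, dtype=float)
    tst = np.asarray(test_band, dtype=float)
    return (ref - tst) / (ref + tst + eps)
```

A pixel whose backscattering falls from 4 to 1 thus scores 0.6, while an unchanged pixel scores 0.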
In Figure 11, the same experiment has been implemented using Sentinel-1 GRD images. In particular, in Figure 11a, the time-zero situation is represented as seen from a Landsat 8 acquisition of April 2016. The state of the same area in November 2019 is depicted in Figure 11b. The yellow placemarks indicate some deforestation areas.
Figure 11c shows a Level-1α product whose reference situation (red band) is an acquisition made in December 2015 and whose test image for the identification of changes (green band) was acquired in February 2016. In Figure 11d, the test image was acquired in January 2020. For both products, as in the previously discussed ALOS-PALSAR representations, the blue band is reserved for a texture measure.
As in the ALOS-PALSAR case, the deforestation patterns visible in Figure 11b correspond to the areas rendered in red in Figure 11d. More generally, however, this is the appearance of any event of vegetation canopy removal, like harvesting (see the red patterns in Figure 11c), which, electromagnetically, is equivalent to deforestation. This means that the discrimination between changing agricultural farmland and actual deforestation should be carefully taken into account and can be a source of misclassification.
Figure 12 reports the comparison of the discriminability of deforestation patterns from undisturbed forest and agricultural fields using the previously discussed multi-temporal RGB representations of ALOS-PALSAR (see Figure 12a) and Sentinel-1 (see Figure 12b) data. Data are shown in the plane defined by the normalized backscattering and texture spans, as extracted from the Level-1α products (see Figure 10e,f for the ALOS-PALSAR case).
The scatter plots show that both sensors allow for a good separation of the deforestation class from the undisturbed forest, as also shown by the boxplots of Figure 8 and Figure 9, with the latter class exhibiting very stable EM characteristics (especially at L-band), its population being tightly clustered around the origin of the plane, which means no changes. The L-band allows for a better discrimination with respect to agricultural land subject to vegetation canopy removal or harvesting with respect to time zero. In this case, the corresponding population is better segregated and closer to the no-variation area than in the C-band case, for which a higher inter-class confusion is registered. Moreover, ALOS-PALSAR images exhibit a better sensitivity to the texture drop, the slope of the regression line for the deforestation class being about 45° versus the almost 52° of the one relevant to Sentinel-1, for which the dominant effect is the drop of the backscattering.
The wavelength also affects the estimation of vegetation parameters like the biomass content within the resolution cell, since it mainly rules the saturation level of the SAR signal. Reference [162] assessed the performance of three methodologies for its estimation starting from both L-band ALOS-PALSAR and X-band TerraSAR-X data and concluded that artificial neural networks were the best performing, provided that a significant amount of ground data is available. An interesting study by Yu and Saatchi [163] on moist forests in the Americas, Africa and Southeast Asia, as well as on temperate conifer forests, reported a good sensitivity of the ALOS-PALSAR HV channel backscattering up to about 100 t/ha and severe signal saturation phenomena (i.e., constant or decreasing EM return) when the AGB increases above 200 t/ha in tropical moist forests. This phenomenon has also been reported in [164]. Similar conclusions were derived in [165] for forests in the North-eastern United States. Reference [166] reported that, in the case of Australian Queensland forests, the L-band HV backscattering is more sensitive to the AGB (up to about 270 t/ha) than the HH one, which is also strongly influenced by increasing soil moisture, which causes a drop of the saturation level from about 170 t/ha to about 100 t/ha with respect to dry conditions. In the case of availability of fully polarimetric data, Neuman et al. [167] claimed that the best correlation with the AGB is given by the second Pauli component HH–VV, which is related to ground–trunk and ground–branch scattering.
Working with C-band images, the sensitivity to the AGB significantly decreases with respect to the L-band due to earlier saturation of the signal. As an example, Reference [168] reported, in a tropical environment, a sensitivity of about 30 t/ha, 70 t/ha and 60 t/ha for ERS-1 VV, SIR-C HH and SIR-C HV data, respectively. In such cases, one of the strategies to cope with this limitation is multi-sensor data fusion. Reference [169] claimed an increased sensitivity in a tropical environment of up to 250 t/ha when Sentinel-1 data are combined with ALOS-PALSAR-2 images. As found in [170], the integration of Sentinel-2 optical data can bring the sensitivity of the estimate up to 400 t/ha, a value that allows the study of most of the forests in the Mediterranean area.
The exploitation of the S-band seems promising for AGB estimation [171]. Preliminary studies made with airborne images acquired in view of the launch of the NovaSAR mission [158] claimed that the relation between the EM return of a broadleaved UK forest and the AGB follows a non-linear model up to about 150 t/ha at all polarizations, while a lower sensitivity was observed for needle-leaved stands [172]. The authors related this phenomenon to the higher density of foliage and branches with respect to broadleaved stands. Interestingly, this work also used S-band signal simulations to decompose the various structural contributions within the total backscattered energy from the forest stands. The simulations were implemented using the Michigan microwave canopy scattering (MIMICS) model [173]. They revealed that, as an example, in the case of a broadleaved canopy, the co-polarized backscattering is dominated by the ground/trunk interaction at HH polarization and by volume scattering at VV polarization. Direct crown scattering is the dominant mechanism for the cross-polarized channel [172]. These findings are similar to those claimed in [174,175], whose authors differentiated clear-cut areas from forest stands using S-band data acquired in the early 1990s by the Russian platform Almaz.
From the literature, it arises that the saturation level of the SAR signal as a function of the AGB has been variously reported, with variations that can be significant [176] and in any case justifiable due to the strong influence of the forest type and structure on the estimate. Variability in the relationship between the SAR EM return and the AGB has also been observed due to acquisition parameters, like the incidence angle [177], and environmental factors such as rainfall [166] or freeze-thaw cycles [178]. The most reliable values for the L-band and P-band sensitivity to the AGB (without any data fusion) are on the order of 60–100 t/ha and 100–150 t/ha, respectively, and this raises the perception that SAR data have a limited capability in AGB estimation, especially in the case of high-biomass forests [166].
The topic of SAR sensitivity to the AGB is particularly relevant with regard to forest fire detection [179]. With about 4 million km2 burned globally every year [180], fires represent today one of the most serious environmental threats.
The impact of fires on the backscattering coefficient has been reported as ambiguous in the literature. Fires induce variations of the EM return that mostly depend on the forest structure and moisture, as well as on the moisture of the underlying soil [51,181]. Combustion reduces the number of vegetation scattering elements, and this is expected to reduce the backscattering coefficient as well. A strong decrease of the backscattering was registered in a tropical environment under dry conditions using C-band VV polarized data due to decreased volume scattering caused by the burned canopy and an increased heat flux leading to a drier soil [182,183]. However, the lack of the vegetation canopy may also increase the scattering contribution of the ground [184,185]. This makes the scattering from the fire scar affected by weather conditions since, as an example, after rainfall, the total backscattered energy can increase due to the increased moisture of the soil exposed by the fire, thus complicating the discrimination from the unburned surrounding stands [182]. This behavior has also been observed in boreal forests using C-band co-polarized images [186,187,188].
Reference [189] found a drop in the backscattering for the C- and L-band cross-polarized channels in fire-affected areas with respect to the adjacent unburned forest in a temperate environment. Conversely, in the Australian savanna, Menges et al. [190] registered an increase and a decrease of the L-band co-polarized and cross-polarized backscattering, respectively. In Mediterranean forests, under dry conditions, Imperatore et al. [191] reported a decrease of the EM return for both VV and VH polarizations using Sentinel-1 data, with the effect more pronounced in the cross-polarized channel. In the same environment, Reference [192] studied the effect of forest fires on the X-, C- and L-band backscattering and reported, under dry conditions, an increase of the EM return with increasing fire severity for the X- and C-band co-polarized components and a decrease for the L-band one. The cross-polarized channel, instead, which is more sensitive to volume scattering, was found to decrease at all frequencies. A result conflicting with the findings of Imperatore et al. was also reported in [193], which claimed an increase of the post-fire backscattering coefficient for C-band co-polarized ERS-2 data independently of the accumulated precipitation index.
In summary, forest fires trigger a complex phenomenology that, as seen by SAR, can generate a wide range of backscattering behaviors, varying as a function of wavelength, acquisition parameters, meteorological conditions and topography [192]. The literature mostly agrees that, under dry conditions, the backscattering from the burned forest decreases. Conversely, under wet conditions, an increase of the EM return should be expected due to the higher contribution of the soil exposed following the loss of the vegetation canopy. This suggests that the analysis of fire scars using SAR data should always be coupled with rainfall data for the study area [191].
In Table 2, the reviewed literature about forestry has been summarized and categorized based on the application, the adopted methodology and the data exploited. Further considerations are provided in Section 5.

4.2. Water Resources

Water is the primary natural resource for all living beings on Earth, and it is under pressure. The UN estimated that more than 2 billion people live in countries experiencing high water stress [194]. Global water demand is expected to keep growing at a rate of about 1% per year until 2050, mostly driven by demand from developing countries, amounting to an increase of 20% to 30% over the current level of water use [195].
As estimated by the UN, about 90% of all natural disasters are water-related. Floods are the most frequent and the most dangerous among them; on the other hand, between 1995 and 2015, droughts accounted for 5% of worldwide natural disasters, affecting 1.1 billion people, killing 22,000, and causing USD 100 billion in damages [194].
Drought is a recurring phenomenon that has affected civilizations throughout history. The American Meteorological Society grouped drought types into four inter-related categories: meteorological, agricultural, hydrological and socioeconomic [196]. Meteorological drought is caused by a prolonged absence of precipitation. Agricultural drought occurs at a critical time during the growing season of crops and can be exacerbated by reduced rainfall, which can also affect the subsurface water supply, reducing stream flows and groundwater, reservoir and lake levels, thus resulting in hydrological drought. Finally, socioeconomic drought associates the supply and demand of goods with the weather conditions that ultimately rule all the categories. As discussed in [196], the onset of drought can be detected using ad hoc indices calculated from temporal temperature and rainfall statistics, as well as soil moisture, vegetation indices and reservoir storage parameters.
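As an illustration of the index-based detection mentioned above, the following sketch computes a deliberately minimal drought indicator: a z-score of the 3-month accumulated rainfall against its long-term statistics. It is a simplified stand-in for operational indices such as the SPI, and all values are synthetic:

```python
import numpy as np

def rainfall_anomaly_index(monthly_rain_mm, window=3):
    """Illustrative drought indicator (NOT an official index such as the SPI):
    z-score of the rainfall accumulated over a sliding window against the
    long-term statistics of that accumulation."""
    acc = np.convolve(monthly_rain_mm, np.ones(window), mode="valid")
    return (acc - acc.mean()) / acc.std()

# three years of synthetic monthly rainfall with a six-month dry spell
rain = np.full(36, 100.0)
rain[24:30] = 10.0
idx = rainfall_anomaly_index(rain)
# the index is most negative for the windows fully inside the dry spell
```

Operational indices add distribution fitting and multiple time scales, but the underlying idea, comparing the current accumulation against its climatology, is the same.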
Surface water bodies are dynamic objects: they shrink, expand and/or change their appearance over time due to both natural and human-induced factors [197]. Changes in surface water volume can have serious consequences: an increase can lead to floods (see Section 4.3), while a decrease is a trigger for drought. Therefore, it is crucial to effectively detect the presence of surface water, to estimate its extent and volume, and to monitor its dynamics [198]. This is even more important in areas that are difficult to access and/or characterized by chronic scarcity of water due to extreme climate conditions [199].
Remote sensing technologies offer unprecedented possibilities to observe surface water dynamics, being much more effective and cheaper than traditional in-situ measurements. They constitute an effective tool for the large-scale estimation of the quantities required to calculate the drought indices introduced above. This is mostly done using passive remote sensing [200,201], due to the availability of long time series of data and the ease with which relevant vegetation and water parameters can be extracted from them using spectral indices.
Among these quantities, the extent of water bodies is particularly important. As mentioned, it can be effectively estimated using MS data [202]. Recently, Donchyts et al. [203] presented a global map of surface water changes between 1985 and 2016, obtained by processing images from multiple Landsat missions using the Google Earth Engine [204]. This product offers an extraordinary picture of the water/land and land/water conversions that occurred over the last 30 years, but it cannot capture rapid (e.g., seasonal) changes, because cloud cover prevents MS imaging of the Earth surface throughout the year.
The continuous monitoring of the available water is very important, for example, in semi-arid environments, where the alternation between wet and dry seasons causes a natural oscillation of the water volume retained by dams. This cycle has to be monitored because its instability, with the consequent reduction of water availability for agricultural use, human consumption and livestock, can set off severe droughts which, in turn, could trigger a Malthusian crisis [205], i.e., a situation in which the population of a given area exceeds its food supply.
In this context, the all-weather and all-time imaging capabilities of SAR sensors are crucial, and the literature has demonstrated their usefulness for this purpose [53,82,206,207,208,209,210,211,212]. However, when dealing with small reservoirs, i.e., those with an extension smaller than 100 ha, medium resolution is a limitation [210]. These reservoirs are often self-built using rudimentary construction techniques and self-managed by the local communities for which they constitute a fundamental resource. Their monitoring requires high-resolution data, which are an enabling factor for several interesting applications derived from their mapping [82].
This can be implemented using the techniques adopted for the identification of standing water. Reference [82] applied standard local thresholding to map small reservoirs in semi-arid Burkina Faso, exploiting the characteristic bi-modal distribution of COSMO-SkyMed X-band SAR data within an area of a few square kilometers containing the basin. Global thresholding was exploited in [213] for mapping reservoirs using TerraSAR-X X-band images acquired in semi-arid Northeastern Brazil. Eilander et al. [212] developed a new Bayesian classifier and applied it to C-band RADARSAT-2 images to map reservoirs in the Upper East Region of Ghana. Reference [53] used bi-temporal Level-1α images to define a normalized difference water index based on the comparison of the two images composing the RGB frame. An example of this workflow is shown in Figure 13.
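The thresholding approaches mentioned above typically exploit the valley between the two modes of the backscattering histogram. A minimal sketch (not the exact implementation of [82] or [213]) using Otsu's method on synthetic dB values:

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Global Otsu threshold for a bimodal distribution of backscatter values (dB)."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                 # cumulative weight of the low-dB class
    w1 = 1.0 - w0                     # weight of the high-dB class
    m0 = np.cumsum(p * centers)       # cumulative first moment
    mt = m0[-1]                       # global mean
    valid = (w0 > 0) & (w1 > 0)
    # between-class variance; its maximum separates the two modes
    sigma_b = np.zeros(n_bins)
    sigma_b[valid] = (mt * w0[valid] - m0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# synthetic bimodal sample: smooth water (~-18 dB) vs. rougher land (~-8 dB)
rng = np.random.default_rng(0)
sigma0 = np.concatenate([rng.normal(-18, 1.0, 5000), rng.normal(-8, 1.5, 20000)])
t = otsu_threshold(sigma0)
water_mask = sigma0 < t
```

Local thresholding applies the same logic within sliding tiles guaranteed to contain both classes, which is why [82] restricts the analysis to a small area around each basin.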
In particular, Figure 13a reports a Level-1α product depicting a small reservoir in Burkina Faso. The rationale behind this representation has been introduced in Section 3. The reference image, acquired at the peak of the dry season by the COSMO-SkyMed platform with 3 m spatial resolution, is loaded on the blue band. The test image for the identification of changes, acquired during the wet season, is assigned to the green band. As a result, temporary water surfaces are rendered in blue. Figure 13b shows the map of the water index; note that its highest response is in correspondence with the basin area identifiable in Figure 13a. Finally, Figure 13c represents the surface water mask obtained via thresholding. Visual inspection reveals a good agreement with the information retrievable from the RGB image, with a limited occurrence of false alarms. This problem can be tackled using object-based processing exploiting the geometric properties of the reservoirs, as detailed in [77].
The reviewed literature highlighted several aspects. First, most of it proposed the exploitation of high-resolution images in order to cope with the dimension of the objects to be identified. Second, the usage of data acquired at short wavelengths, i.e., X- or C-band, in the co-polarized channel is preferred. This is due to the dependence of SAR backscattering on both polarization and roughness.
At microwaves, water bodies behave as smooth surfaces compared to bare (or vegetated) soils. However, strictly speaking, the Rayleigh criterion holds: a surface is smooth if h < λ/(8 cos τ), where h is the standard deviation of the surface height, τ the depression angle and λ the wavelength [125]. A surface with a given roughness will appear rougher (i.e., will exhibit a higher backscattering) at shorter wavelengths than at longer ones [214]. In other words, soils tend to appear brighter at short wavelengths, and this tends to maximize the contrast with standing water surfaces, which are usually very smooth.
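A quick sketch of the criterion, computing the maximum height standard deviation for which a surface is still "smooth" at nominal radar wavelengths (the depression angle is an assumed example value):

```python
import numpy as np

# Rayleigh criterion: a surface is "smooth" if h < lambda / (8 cos tau).
# Wavelengths are nominal band values; the depression angle is an assumption.
bands_cm = {"X": 3.1, "C": 5.6, "S": 9.4, "L": 23.6}
tau = np.deg2rad(55.0)

h_max_cm = {band: lam / (8.0 * np.cos(tau)) for band, lam in bands_cm.items()}
# at X-band the height std-dev must stay below ~0.7 cm for the surface to be
# radar-smooth, while at L-band the limit relaxes to ~5 cm
```

The monotonic growth of the threshold with wavelength is exactly why the same soil looks rougher (brighter) at X-band than at L-band.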
Such an effect can be appreciated in Figure 14, where the ratio R between the backscattering coefficients of a rough terrain and a water surface is simulated as a function of the incidence angle θ.
Simulations have been implemented using the small perturbation method [215] for different wavelengths, namely, those corresponding to X-band, C-band, S-band and L-band. They are represented in Figure 14 by the curves depicted in blue, red, green and orange colors, respectively. The surface roughness has been described in terms of the fractal parameters H (the Hurst coefficient) and s (the standard deviation at unitary distance). The interested reader can refer to [215] for more information about their physical meaning. The geometric and dielectric parameters used to characterize rough terrain and water surface are presented in Table 3, where electromagnetic parameters are indicated with ε (dielectric permittivity) and σ (conductivity).
The results of the simulation shown in Figure 14 confirm that the discrimination between terrain and water increases with the frequency and spans from about 9.6 dB at L band to about 10.5 dB at X band at 15° incidence angle. For all the examined wavelengths, the separability slightly increases with the incidence angle.
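A simplified version of this kind of simulation can be sketched with the first-order SPM (Bragg) solution for a Gaussian surface autocorrelation, rather than the fractal description of [215] used for Figure 14. All surface and dielectric parameters below are illustrative assumptions, so only the qualitative trend (terrain/water contrast growing with frequency) should be read into the numbers:

```python
import numpy as np

def bragg_vv(eps, theta):
    """First-order SPM (Bragg) VV coefficient for relative permittivity eps."""
    s2 = np.sin(theta) ** 2
    return (eps - 1.0) * (s2 - eps * (1.0 + s2)) / \
           (eps * np.cos(theta) + np.sqrt(eps - s2)) ** 2

def sigma0_vv(eps, h_cm, corr_cm, lam_cm, theta):
    """sigma0 from first-order SPM with a Gaussian autocorrelation of height
    std-dev h and correlation length corr (common constants omitted)."""
    k = 2.0 * np.pi / lam_cm
    kappa = 2.0 * k * np.sin(theta)                        # Bragg wavenumber
    W = (corr_cm ** 2 / (4.0 * np.pi)) * np.exp(-(kappa * corr_cm) ** 2 / 4.0)
    return 8.0 * k ** 4 * h_cm ** 2 * np.cos(theta) ** 4 * \
           np.abs(bragg_vv(eps, theta)) ** 2 * W

theta = np.deg2rad(30.0)
contrast_db = {}
for band, lam in {"X": 3.1, "C": 5.6, "L": 23.6}.items():
    terrain = sigma0_vv(6.0 - 1.0j, 1.0, 2.0, lam, theta)   # assumed dry soil
    water = sigma0_vv(70.0 - 30.0j, 0.2, 2.5, lam, theta)   # assumed calm water
    contrast_db[band] = 10.0 * np.log10(terrain / water)
# the contrast grows monotonically from L- to X-band, as in Figure 14
```

The k⁴ and cos⁴θ factors cancel in the ratio; the frequency dependence enters through the roughness spectrum evaluated at the Bragg wavenumber, which penalizes the smoother, longer-correlated water surface more severely at short wavelengths.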
As for the dependence of surface scattering on polarization, it is worthwhile to recall that a linear cross-polarization response results when the transmitted wave is re-polarized to its orthogonal polarization. Re-polarization from horizontal to vertical or from vertical to horizontal can happen in the presence of multiple scattering, i.e., at least two bounces. Smooth surfaces are dominated by single-bounce forward scattering with a very low cross-polarized component, comparable to the noise floor [216]. This means that the slightly higher roughness of a bare soil with respect to standing water tends to be flattened in the cross-polarized channel and enhanced, through a higher contrast, in the co-polarized one.
The results of the simulations are confirmed by real data. Figure 15 reports the boxplots representative of the EM response of land and water features for different available sensors. Using L-band (~24 cm wavelength) ALOS-PALSAR data, the span between the median responses of the two features is around 4.3 dB, but the relevant boxes overlap, which means a significant probability of class confusion. In the case of C-band (about 5 cm wavelength) Sentinel-1 data, the span is about 7.3 dB, without any overlap between the boxes. A further decrease of the wavelength down to X-band (3 cm) brings the difference between the class medians up to 10.5 dB. NovaSAR data (acquired at S-band, thus with about 10 cm wavelength) show a separation between the medians of about 8 dB, comparable to that achieved using Sentinel-1 images, again with no overlap between the boxes.
The knowledge of the extent (and of the position) of the reservoirs opens the door to several multidisciplinary applications, spanning from hydrology to health sciences. As an example, the duration of the presence of surface water and the retention capacity of the reservoirs are key parameters for modelling the diffusion of water-related diseases like schistosomiasis, because reservoirs offer a favourable environment for the proliferation of the snails responsible for its transmission [217]. In semi-arid environments, where most of the reservoirs are completely dry at the peak of the dry season, it is possible to estimate the water volume using SAR interferometry [82]. The absence of surface water in the basin areas exposes the soil to the EM signal, and this allows retrieving relevant information from its phase, which is exploited to produce digital elevation models (DEMs) using repeated acquisitions with a short temporal gap [218].
Data resolution is fundamental in this application, since the freely available DEMs with global coverage, i.e., SRTM (90 m resolution) and ASTER (30 m resolution), are not compatible with the average dimension of the reservoirs to be analysed [208]. The global TanDEM-X DEM, with 12 m spatial resolution (available free of charge only for research purposes, subject to the approval of a scientific proposal and the payment of a service fee), would fit the dimension of the objects to be monitored. However, all the available elevation datasets provide static pictures of the scene, while for the monitoring of small reservoirs their dynamic evolution is more interesting, as they are subject to fast sedimentation rates.
To this end, Reference [208] exploited high-resolution COSMO-SkyMed images, acquired with 3 m spatial resolution and a one-day time gap at the peak of the dry season, to produce a DEM with 9 m resolution covering an area of about 1600 km2 in semi-arid Burkina Faso. By coupling high-resolution elevation data with temporal surface water masks, it is possible to estimate reservoir capacities and sedimentation rates and to derive semi-empirical relationships relating surface area to retained water volume.
The estimation of the reservoirs bathymetry starting from elevation data can be implemented, as suggested by [82], by superimposing the shorelines retrieved from SAR observations on the DEM and calculating the height of the contour h_c. Each point p belonging to the reservoir area is considered as a water column whose height h_p is equal to h_p = h_c − h_DEM, where h_DEM is the DEM elevation at that point. Thus, the capacity relevant to the point p will be v_p = h_p · S_p, where S_p is the surface element associated with the point p, corresponding to the DEM resolution. Finally, the capacity V of the reservoir can be estimated by integrating the elementary volume over the whole reservoir mask, i.e.,
V = ∫ v_p dS = Σ_{i=1}^{N} v_{p_i}  [m³]  (2)
where N is the total number of samples belonging to the surface water mask of a particular reservoir.
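A minimal sketch of the volume estimation described above, with a toy DEM standing in for the interferometric product (all numbers are illustrative):

```python
import numpy as np

def reservoir_volume(dem, water_mask, cell_area_m2):
    """Volume via Equation (2): each flooded DEM cell is a water column of
    height h_p = h_c - h_DEM, with h_c the shoreline contour elevation,
    here taken as the highest flooded cell."""
    h_c = dem[water_mask].max()
    depth = np.clip(h_c - dem, 0.0, None)
    return float(np.sum(depth[water_mask]) * cell_area_m2)

# toy 9 m resolution DEM of a bowl-shaped basin (elevations in meters)
dem = np.array([[305.0, 303.0, 305.0],
                [303.0, 300.0, 303.0],
                [305.0, 303.0, 305.0]])
mask = dem <= 303.0                     # SAR-derived surface water mask
v = reservoir_volume(dem, mask, cell_area_m2=9.0 * 9.0)
# shoreline cells have zero depth; the 3 m deep center cell gives 3 * 81 = 243 m^3
```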
In Figure 16a, the statistics for the Laaba basin (Burkina Faso, see Figure 13), retrieved by applying the above-described methodology to a set of images acquired between June 2010 and December 2011, are reported. In particular, the upper frame of the picture reports the surface trend, while the lower one shows the corresponding estimated capacity. The application deals with very small objects: in the case of the Laaba basin, the maximum estimated extension was around 400,000 m2. It is also interesting to remark that, at the peak of the dry season, between March and April, the reservoir disappears due to the high evapotranspiration rates characteristic of the semi-arid Sahel.
In Figure 16b, the discrete SAR-derived volume estimation is compared with continuous data obtained using the soil conservation service model [219], an uncalibrated model suitable for the simulation of runoff in small ungauged watersheds. The reader should appreciate the good agreement between the two methodologies which, as observed in [82], could work synergistically to provide better results in critical situations characterized by strong non-linearity of the water availability trend due to heavy precipitation or long drought. In these cases, SAR observations can serve as calibration points to improve the estimates obtained through hydrological models.
By applying Equation (2) to several reservoirs, it is possible to derive expressions relating their surface area and capacity through regression. This was done in [82] using data derived exclusively from SAR observations over Burkina Faso. Other examples of the derivation of such a relation have been provided in [220] for Brazil using only SAR data, in [221] using multi-source remote sensing data acquired over India and in [222] through an extensive bathymetric survey in the Upper East Region of Ghana. Whatever the derivation methodology, the usefulness of a relation V = aA^b (where V is the volume, A the surface area, and a and b two regression coefficients) is easy to understand: it allows for the estimation of the capacity of a reservoir starting from the knowledge of its extent, with the latter easily obtainable from active or passive remote sensing. Therefore, these relations constitute a very powerful tool for the temporal monitoring of water availability in scarcely accessible areas and for the estimation of one of the parameters required for the calculation of the drought indices introduced at the beginning of this Section.
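Since V = aA^b is linear in log-log space, the regression coefficients can be obtained with an ordinary least-squares fit. The following sketch recovers assumed coefficients from synthetic reservoirs (the values of a and b are illustrative, not those of [82]):

```python
import numpy as np

def fit_area_volume(area, volume):
    """Fit V = a * A**b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(area), np.log(volume), 1)
    return np.exp(log_a), b

# synthetic reservoirs generated with a = 0.01 and b = 1.4 plus mild noise
rng = np.random.default_rng(1)
A = rng.uniform(1e4, 4e5, 50)                        # surface areas [m^2]
V = 0.01 * A ** 1.4 * rng.lognormal(0.0, 0.05, 50)   # retained volumes [m^3]
a, b = fit_area_volume(A, V)
# a and b are recovered; the volume of a new reservoir then follows from its area
```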
It is worthwhile to remark that the relations between the reservoirs' surface area and retained volume have a validity limited to areas sharing the same geomorphological characteristics, in which it is plausible that reservoirs have comparable bathymetric profiles. That is the reason why different geographic areas have different relations. Moreover, the time variable plays an important role. Semi-arid environments are typically characterized by high sedimentation rates [223], and this, together with the rudimentary construction techniques adopted for building reservoirs and the lack of a structured management, can severely affect the storage capacity of the tanks in a relatively short time [224]. As an example, Reference [208] reported that many reservoirs in the Yatenga region in Burkina Faso lost more than 50% of their original capacity in a time frame of 25 years. Reference [225] exploited Sentinel-1 SAR images and elevation data to study the Ghataprabha reservoir (India) and estimated a sedimentation rate of about 4 Mm3/year since its construction in 1974. The continuous monitoring of this phenomenon is therefore essential for an effective management of water resources, especially in areas characterized by chronic water scarcity.

4.3. Flood Mapping

The UN estimated that about 90% of all natural disasters are water-related. In particular, in the period 1998–2017, floods accounted for more than 43% of the total, affecting 2 billion people and causing more than 142,000 victims and hundreds of billions of dollars in damages [226]. These numbers (see also Figure 17) should help the reader understand the key importance of monitoring and forecasting activities in order to increase the response capacity and resilience to these kinds of events. Satellite remote sensing has been widely exploited for these purposes [80,227], with SAR sensors playing a central role due to their independence from illumination and weather conditions [2] and their sensitivity to surface roughness, which is a key parameter in the detection of standing water [228].
In general, a flood event can be seen as the composition of different phases, as shown in Figure 18, which reports the generic cycle of a hazard. Satellite remote sensing in general, and SAR observations in particular, can significantly contribute to many of them. As an example, a synoptic picture of an area hit by a flood event can be very useful for damage assessment, providing important information for insurance companies [229]. Geomorphological information [230] can contribute to warning systems [227,231] in which radar-derived flood maps are assimilated into hydrological and weather models to improve forecasts.
The mapping activity, in which SAR becomes crucial thanks to imaging characteristics suitable for near-real-time provisioning, is fundamental in the response phase [80]. For this purpose, some operational services already exist [232], but the research community is very active on the topic, also thanks to the validation data made available by the European Commission through the Copernicus Emergency Management Service. Quality-approved flood maps in vector format for several events can be found at https://emergency.copernicus.eu/mapping (accessed on 1 February 2020).
Flood mapping using SAR is usually addressed with change detection techniques, ranging from simple comparison operators, like the difference [233] or the log-ratio [234], to more refined methods aimed at strengthening automatic thresholding algorithms using local image statistics [235,236]. Usually, when simple comparison operators are used, the segmentation is supervised or semi-supervised. In these cases, the visual interpretation by an expert operator can be empowered and made more efficient by the innovative higher-level data representations introduced in Section 3 [5,24,86]. An example is provided in Figure 19, which shows the evolution of the flooded area in the rice cultivations of the Albufera natural reserve (Spain). The purpose here is to highlight the presence of standing water and to provide a quick understanding of the scene land cover through a balanced colour coding as close as possible to the natural colour palette, which is the one the operator is used to. This can be achieved by combining a pre-event and a post-event image. As suggested in [5], the pre-event image can be conveniently loaded on the blue band and the post-event image on the green one. The third channel (the red) can be reserved for the interferometric coherence or a texture measure.
The images displayed in Figure 19 derive from Sentinel-1 ground range detected products having 10 m spatial resolution. An acquisition made on 6 April 2017 with completely dry landscape has been considered as pre-event image. Post-event images have been acquired on 18 April 2017 (Figure 19a), 4 August 2017 (Figure 19b) and 14 December 2017 (Figure 19c). In the first case, the pre- and post-event images are acquired with short time span. The landscape is mainly unchanged, and as a consequence, the dominant colour is the one representative of a balance of the electromagnetic response of the two scenes, i.e., a balance of the blue and green channels (Figure 19a).
In the other two cases (see Figure 19b,c), rice crops are flooded. The power of this representation lies in its immediate interpretation: the observer can readily associate the flood with the blue area surrounding the lake [5]. This is due to the dominance of the terrain response in the pre-event image with respect to that of the post-event one, in which the smooth water layer covering a rougher terrain causes a drop in the backscattered signal.
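The colour coding described above can be sketched as follows, assuming calibrated backscattering images in dB and a coherence layer in [0, 1]; the dB stretching range is an assumption:

```python
import numpy as np

def multitemporal_rgb(pre_db, post_db, coherence, db_range=(-25.0, 0.0)):
    """RGB composition in the spirit of [5]: interferometric coherence on red,
    post-event backscattering on green, pre-event backscattering on blue."""
    lo, hi = db_range
    def stretch(img_db):
        return np.clip((img_db - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([coherence, stretch(post_db), stretch(pre_db)])

# two pixels: an unchanged field, and one flooded after the pre-event acquisition
pre_db = np.array([[-8.0, -8.0]])
post_db = np.array([[-8.0, -22.0]])
coh = np.array([[0.8, 0.1]])          # coherence drops over the new water
rgb = multitemporal_rgb(pre_db, post_db, coh)
# the flooded pixel is rendered blue: its pre-event (blue) response dominates
```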
Beyond being helpful for the visual interpretation of the SAR response to a flood event, these images can be the starting point for mapping. The simplest solution is the exploitation of a multi-temporal normalized image ratio [53,54] segmented via standard thresholding. However, as discussed in Section 2, this operation can be critical, especially when unsupervised, since the performance of the algorithm can vary significantly for small oscillations of the threshold value. This can be avoided with semantic clustering [77], Bayesian network fusion [237,238,239] or fuzzy logic principles [95,240], which are also useful for the combination of heterogeneous multi-source data [241]. Another fusion scheme uses an MS image, whenever acquired before the event, to train a supervised classifier acting on the post-event SAR image, exploiting the identified permanent water features [242]. More recently, some deep architectures have been developed for this purpose [243,244,245], even if their use on SAR data is still rather limited by the scarce availability of relevant training samples.
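A minimal sketch of the log-ratio operator with a fixed threshold (the threshold value is an assumption; the refined methods cited above estimate it automatically from the image statistics):

```python
import numpy as np

def log_ratio_change(pre, post, threshold_db=-6.0):
    """Bi-temporal log-ratio change layer and flood mask.
    pre, post: backscattering coefficient images in linear power units.
    A strongly negative ratio flags a drop in backscatter (smooth floodwater)."""
    lr = 10.0 * np.log10(post / pre)
    return lr, lr < threshold_db

# toy scene: land at sigma0 ~ 0.1; a flooded patch drops to ~0.005 post-event
pre = np.full((4, 4), 0.1)
post = pre.copy()
post[1:3, 1:3] = 0.005
lr, flood = log_ratio_change(pre, post)
# the four central pixels show 10*log10(0.005/0.1) = -13 dB and are flagged
```

The log-ratio is preferred over the plain difference because speckle is multiplicative: the ratio statistics do not depend on the local mean backscatter.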
In urban areas, flood detection is more challenging due to the complex backscattering mechanisms triggered by the presence of buildings, which cause double-bounce scattering [246] that is, in principle, enhanced by the presence of a layer of standing water [237]. However, as demonstrated in [247], this enhancement is a function of the aspect angle, i.e., the angle between the orientation of the wall and the SAR azimuth direction, and can drop significantly as the aspect angle increases. The floodwater level also plays a role, as the double-bounce enhancement decreases as the water level rises [248].
To overcome these difficulties related to the incoherent detection of floodwater in urban areas, the interferometric coherence can be exploited as an additional source of information [247,249,250,251]. Undisturbed urban areas are phase-stable, i.e., they are characterized by high interferometric coherence. The presence of water in the urban environment changes the spatial distribution of scatterers within a resolution cell, resulting in a coherence drop. This drop can be identified by calculating the coherence difference between two pairs sharing the same pre-event reference (or master) image: the first pair built with another pre-event image, the second considering a post-event acquisition.
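A minimal sketch of this coherence-difference logic on synthetic single-look complex (SLC) samples, using the standard sample coherence estimator:

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence between two co-registered SLC patches (complex arrays)."""
    num = np.abs(np.mean(s1 * np.conj(s2)))
    den = np.sqrt(np.mean(np.abs(s1) ** 2) * np.mean(np.abs(s2) ** 2))
    return num / max(den, 1e-12)

rng = np.random.default_rng(2)
noise = lambda: rng.normal(size=64) + 1j * rng.normal(size=64)
stable = noise()                      # urban patch, scatterers unchanged
pre_1 = stable
pre_2 = stable + 0.2 * noise()        # second pre-event pass, mild decorrelation
post = noise()                        # flood rearranged the scatterers completely

gamma_pre = coherence(pre_1, pre_2)   # high: undisturbed urban area
gamma_post = coherence(pre_1, post)   # low: coherence lost after the event
delta = gamma_pre - gamma_post        # large positive difference flags the flood
```

In practice the estimator is applied on a small sliding window over full images, and the difference map is thresholded or fed to the classifiers discussed above.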
What arises from the literature is that the mapping of flooded areas using SAR data, even unsupervised, has reached very high performance and robustness. Therefore, future challenges include (i) a better assimilation of SAR data into hydrological [252] and weather models, in order to improve the contribution of SAR to the other phases of the cycle depicted in Figure 18, and (ii) the generation of products adding value to the flood maps (like the water level [248,253] or the real-time estimate of the affected population) or taking value from them, as in the case of flood vulnerability maps [254].
In Table 4, the reviewed literature about water resources has been summarized and categorized based on the application, the adopted methodology and the data exploited. More comments and considerations are provided in Section 5.

5. Discussion

Today, Earth observation (EO) platforms collect more than 4 million km2 of images of the Earth surface per day. The diffusion of EO data is creating an incredible opportunity for the launch of new downstream applications and services, powered by the new possibilities for big data processing and analysis offered by innovations in cloud computing and artificial intelligence. This trend allows for capturing higher knowledge from EO data to be exploited in new vertical applications concerning environmental monitoring, disaster relief, precision farming, mining and maritime surveillance.
The European Commission estimated that, in 2017, the global EO economy accounted for about EUR 10 billion, divided between the sales of EO platforms (the upstream section of the supply chain), and the data segment, i.e., their acquisition, processing and transformation into information products for end users (the downstream section). The market is mostly driven by the upstream segment, which constitutes about 70% of the total revenues. The downstream market is estimated to be around EUR 3 billion and is mainly driven by governmental applications, which represent between 50% and 60% of the revenues [255].
However, in order to accomplish further growth, especially in the private sector, some strategic problems need to be solved. The major issues pointed out by a recent ESA survey [256] are the cost of raw data, market/user acceptance, and the lack of budget to acquire services (see Figure 20). The launch of the Copernicus program is mitigating all these problems: its free and open data policy tackles the cost of mid-resolution data and, at the same time, pushes the providers of high-resolution images to lower the cost of their products. This allows EO companies to offer more convenient prices for their services.
Indeed, the Copernicus program is changing the market not only with its data policy but also through a dedicated global campaign to raise awareness of the usefulness of satellite technologies. The EO market is greatly benefitting from this campaign, as more companies and customers are recognizing the advantages of using EO data. However, as shown by the data reported in Table 5, the cost of high-resolution images, especially those with sub-metric resolution, is still high, and this affects the commercial development of applications requiring this kind of data. As an example, the cost per square kilometer of a staring spotlight TerraSAR-X archive image with 25 cm resolution (the finest available from space today) is about EUR 230, while a multispectral WorldView-4 image with 30 cm resolution in the panchromatic channel costs less than EUR 30 per square kilometer [257].
In the previous Section, the suitability of time series of SAR observations for many applications has been highlighted through a deep review of the reference literature on those topics, which are all related to the change detection approaches reviewed in Section 2 and Section 3. That review highlighted that most of the literature relies on thresholding an opportunely selected change layer [48]. However, the underlying weakness of this approach lies in the concept of the threshold itself, which is intrinsically not robust. As an example, consider a situation in which there is more than one change in the scene, like a flood in an agricultural area. At microwaves, the areas interested by canopy removal (due to harvesting, in this case) and those affected by the flood are both characterized by a decrease of the backscattering coefficient: in the first case due to the lack of volumetric scattering contributions from the vegetation canopies, in the second due to the change in surface roughness caused by the presence of a thick water layer covering the soil. Therefore, if the objective is the identification of changes in crops, two thresholds are necessary: one isolating the changed pixels from the unchanged background, and another separating agricultural land from flooded areas. Most of the analyzed literature cannot cope with this problem, since thresholding is typically blind with respect to the type of change.
Figure 20. Barriers to the diffusion of EO data (source [258]).
The fusion approach for change detection is generally successful in the identification of different types of changes on the same scene. This is due to the introduction of an information layer (the MS one) carrying a spectral diversity that allows for the discrimination of different surfaces at the biochemical level, rather than at that of the surface roughness, which is the domain of SAR. However, the analysis of the relevant literature revealed some weaknesses concerning automation, uncertainty of the information and the general complexity of the workflow. As an example, an approach like that proposed in [73] requires strong supervision, and this makes it applicable only to small-scale problems. The uncertainty of the information is related to the fact that, as in [74], the main source of information is the MS image, whose availability is subject to favourable weather and illumination conditions. The general complexity of the processing chain can be due to several factors, like the number of sources, their typology, the number of information layers and processing steps needed to reach the results, the computational time, etc. A methodology like that described in [71], for instance, requires the availability of MS panchromatic images, which are not provided by many platforms operative today, especially the low-cost ones. Moreover, the authors proposed to use five features, of which just one is extracted from the SAR image. This means, again, that the MS acquisition is the main source of information.
In Table 1, the literature on SAR change detection approaches reviewed in Section 2 and Section 3 is categorized based on the methodology developed and the application addressed. As discussed in Section 1, the philosophy behind these methods can be of two types. The classic one aims at filling the information gap between sensory data and the required geophysical parameter with an algorithm that excludes the user from the information process. More recently, some higher-level data representations have been introduced to address change detection problems, with the objective of favouring the active participation of the analyst in the information process, powered by their enhanced analytical capability thanks to an improved interaction with the data.
It is interesting to remark that methods using only microwave imagery are quite general and can be used to detect a generic change pattern, whose semantics is assigned offline by the operator or pre-established by the parameter set-up. When object-based processing and/or multi-sensor data fusion is exploited, the techniques are more application-oriented, since the features being selected, the decisions being taken and/or the object parameters being estimated depend on the particular change to be identified.
The reviewed higher-level representations can also be categorized into application-oriented and general-purpose ones. In the first case, the objective is the enhancement of a specific change pattern. In the second, the aim is to maximize the information mined by the observer, favouring the identification of several land cover classes beyond the one constituting the objective of the application.
In Section 4, the general change detection principles were applied in application-oriented contexts relevant to forestry (see Section 4.1) and water resources, with particular focus on semi-arid regions and floods (see Section 4.2 and Section 4.3, respectively). The general impression is that the contribution of SAR data processing in these applications is significant, especially when the detection of standing water is involved, due to the peculiar scattering mechanisms triggered at microwave frequencies, which make smooth surfaces highly distinguishable from rougher terrains [125]. Applications involving vegetation are generally more challenging, due to the complex scattering mechanisms triggered by the canopies [96], and mostly rely on a project-based approach with scarce operational perspectives.
In Table 2 and Table 4, the reviewed literature on forestry and water resources is categorized based on the application, the adopted methodology and the data exploited. In the case of flood mapping, the landscape is quite consolidated from the viewpoint of the data in use, because all of the examined literature used images acquired at short wavelengths (i.e., at X- or C-band) in the co-polarized channel. The picture is more varied on the side of the methodologies, ranging from classic thresholding to more recent deep learning approaches, all of them claiming outstanding results. The lack of benchmarks universally recognized by the community prevents the assessment of the benefits and drawbacks of the different approaches. This is a limitation of the SAR community: in the MS one, there are examples of benchmarks like the Prague Texture Segmentation Data Generator and Benchmark [258] or the Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Society [259,260]. They are useful for understanding the characteristics of the different methodologies in a standardized environment providing datasets, metrics and consistency measures to assess the performance of each of them. In the SAR community, this is not a common practice. An attempt has been made in the case of despeckling algorithms [36], but it is a rather isolated case, and this makes a full comparison of the literature difficult.
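The classic thresholding end of the methodological spectrum can be illustrated with the method of Otsu [61], which selects the threshold maximizing the between-class variance of the histogram. The sketch below is a minimal, self-contained NumPy version applied to a synthetic bimodal SAR-like image; it is a textbook illustration, not the pipeline of any specific flood-mapping paper, and the dB levels are assumptions.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Minimal Otsu threshold for a single-channel image.

    Picks the histogram bin center maximizing the between-class variance
    sigma_b^2 = (mu_T * w0 - m)^2 / (w0 * (1 - w0)), the standard
    cumulative-sum formulation of Otsu's criterion [61].
    """
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()                  # bin probabilities
    w0 = np.cumsum(p)                      # class-0 probability per split
    m = np.cumsum(p * centers)             # class-0 cumulative mean mass
    mu_t = m[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mu_t * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_b = np.nan_to_num(var_b)           # guard empty classes
    return centers[np.argmax(var_b)]

# Synthetic scene: smooth open water (low backscatter) vs rough land, in dB.
image = np.concatenate([np.full(500, -15.0), np.full(500, -5.0)])
t = otsu_threshold(image)
flood_mask = image < t                     # pixels darker than the threshold
```

On real SAR amplitudes, speckle makes the histogram modes overlap, which is precisely why a shared benchmark would be needed to compare this baseline against the more recent approaches fairly.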
However, flood mapping represents one of the most consolidated applications in SAR remote sensing, with successful implementations in industrial contexts [232]. The major challenges for a full commercial development of the technology are related to enriching the maps extracted with literature methods with information that can be attractive to a wider audience of customers beyond public administrations. As an example, the assimilation of flood maps into hydrodynamic and geomorphological models can lead to the production of the flood vulnerability indices needed by insurers to better calibrate the premiums of their customers [229]. This means that, although the literature highlights the full maturity of the remote sensing problem, further efforts are needed to spread the technology in the user segment through the development of new vertical applications.
SAR remote sensing of forests mostly relies on the exploitation of long wavelengths, basically L-band data. In this context, the most successful application is deforestation detection, a problem that can be approached with consolidated change detection techniques, possibly enhanced by the exploitation of multisource data. The other topics covered in this review showed more limitations. The literature on the classification of forest types, as an example, is sometimes contradictory. Some authors reported that C-band backscattering is particularly sensitive to the different branching geometry and foliage of the different species; other works reported better results using the L-band. This disagreement suggests that the choice of the most appropriate band depends on the examined forest, whose structure varies widely with age, management and biome. However, the most relevant works on the topic date back to the 1990s, a symptom that the SAR community is no longer paying sufficient attention to this application, which is much more easily addressed using MS data [261].
Forest fire detection and biomass estimation are indeed much more current topics on which SAR data have demonstrated their effectiveness. In both cases, the literature mostly agrees that L-band cross-polarized data are the most effective, exhibiting good sensitivity to the AGB (up to the order of 200 t/ha). They are also helpful in increasing the contrast of fire scars with respect to undisturbed forest stands, at least in dry conditions. However, state-of-the-art methodologies still require the strong supervision of expert operators and appear quite far from a large-scale operational implementation.
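The loss of sensitivity of cross-polarized backscatter at high AGB is commonly described with a saturating curve. The sketch below uses a generic saturating-exponential model as an illustration; the functional form, the coefficient values and the function names are assumptions for demonstration only, not fitted retrieval coefficients from the reviewed works.

```python
import numpy as np

# Illustrative (made-up) model coefficients, not fitted values.
A, B, C = -16.0, 10.0, 0.015

def sigma0_hv_db(agb, a=A, b=B, c=C):
    """Toy saturating model for L-band HV backscatter (dB) vs AGB (t/ha):
    sigma0 = a + b * (1 - exp(-c * agb)). Flattens as agb grows."""
    return a + b * (1.0 - np.exp(-c * np.asarray(agb, dtype=float)))

def invert_agb(sigma_db, a=A, b=B, c=C):
    """Invert the toy model; only meaningful below saturation (sigma < a + b),
    where the curve is still invertible."""
    return -np.log(1.0 - (sigma_db - a) / b) / c

# Sensitivity shrinks with biomass: a 50 t/ha increment produces a much
# smaller backscatter change near 200-250 t/ha than near 50-100 t/ha.
low_gain = sigma0_hv_db(100.0) - sigma0_hv_db(50.0)
high_gain = sigma0_hv_db(250.0) - sigma0_hv_db(200.0)
```

The shrinking increment at high biomass is the practical meaning of the saturation level: beyond it, noise and speckle swamp the residual sensitivity, and inversion becomes unreliable.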
Remote sensing of water resources is probably the field in which SAR data could contribute the most compared with the current efforts of the community, which are probably insufficient. The literature on reservoir monitoring in arid and semi-arid environments is quite sparse, as is the interaction with the hydrology community, in which the penetration of microwave imagery is very limited.
This application helps to highlight one of the principal barriers preventing the diffusion of SAR data in multidisciplinary contexts, despite the increased data availability provided by the Copernicus Programme, the significant advances made by the scientific community at the technical level and the efforts made to promote their use outside academia: the community shows little propensity to recognize the needs of potential end-users and to adapt its common practices to favour data assimilation within models developed outside of it. As a result, in 2013, before the Copernicus era, the estimated sales of SAR data were in the order of 17% of the total market, mainly skewed towards defense applications [262]. Similar figures were reported in a more recent paper by Denis et al. [263] and in market reports produced by the European Commission [255] and private companies [264].
The latter data are reported in Figure 21, which shows that, although the compound annual growth rate (CAGR, i.e., the mean annual growth rate of an investment over a specified period longer than one year) of the two markets is very similar, the SAR market is still a fraction of the optical one. This means that, despite the benefits brought to the diffusion of SAR data by the Copernicus Programme and the increased number of users approaching them, customers are reluctant to pay for microwave images, preferring to direct their money towards the purchase of optical data [264]. The difficulties related to the handling of SAR data often prevent non-expert users from working autonomously, as is instead possible with MS data, and this makes the role of SAR experts crucial in building a bridge towards the user community. The examined works on water resources monitoring mostly appeared as success cases of this path and demonstrated that dialogue between different disciplines is possible and desirable to trigger further growth in data exploitation and the development of new downstream applications.
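For completeness, the CAGR definition recalled above amounts to a simple compound-growth formula. The figures below are arbitrary placeholders, not the market values of Figure 21.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly rate that turns
    start_value into end_value over `years` years (years > 1)."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative only: a market growing from 100 to 200 (arbitrary units)
# over 5 years corresponds to a CAGR of about 14.9% per year.
rate = cagr(100.0, 200.0, 5)
```

Two markets can thus share nearly identical CAGRs while their absolute sizes stay far apart, which is exactly the situation Figure 21 depicts for SAR versus optical data sales.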

6. Conclusions

This work provided an in-depth literature review of multi-temporal synthetic aperture radar data exploitation for Earth observation, covering general change detection principles and three application contexts: flood mapping, forestry and the management of water resources. The reviewed literature highlighted the usefulness of microwave remote sensing in all the considered topics, although with different levels of technological maturity.
Flood mapping appeared as the most consolidated application among those considered. In this context, highly effective techniques have been developed and commercial solutions are available as well. The most remarkable weakness is probably the lack of products adding value beyond the identification of the flooded areas, which means that synthetic aperture radar is, in practice, exploited only in the flood response phase. Advancing research in this direction, leading to data assimilation into hydrological or weather models to improve forecasts, or introducing higher information layers related to the flood extent (e.g., water level maps or estimates of the affected population), should therefore be considered a priority.
Forestry is a sector in which microwave sensors have historically been widely employed due to their all-weather, day-and-night imaging capabilities, which allow them to supply data in the absence of available multispectral acquisitions. However, the complicated scattering mechanisms involving vegetation canopies hamper the full development of most of the related applications, especially those involving the classification of forest types and the estimation of the above-ground biomass. Better results have been reported concerning the mapping of fire scars and deforestation patterns, with the latter application probably representing the most mature one, ready to be implemented in industrial environments. The fusion of microwave and multispectral data has proved beneficial in all the examined contexts.
Remote sensing of water resources represents a field in which synthetic aperture radar research can contribute more in the future. The interaction between the microwave remote sensing community and that of hydrologists and hydraulic engineers has proved successful but is still quite limited, as is the penetration of microwave Earth observation technologies in such a multidisciplinary context.
Indeed, the diffusion of synthetic aperture radar data outside their reference scientific community and industry is probably the most urgent problem to be addressed by its members. This is due to both technical and commercial issues. Many market reports stated that the principal barriers to the spreading of microwave images in industry are the cost of the data and the high expertise needed for their handling. Clearly, the scientific community cannot control the pricing applied by data distributors. However, thanks to the Copernicus Programme, the cost problem now concerns only high-resolution data (finer than 10 m spatial resolution) and those acquired at long wavelengths (e.g., L-band), since, at the time of this research, there is no possibility of accessing new such acquisitions for free, although some archive data are available for any purpose.
On the other hand, the introduction of more interpretable and user-friendly data representations (useful for detecting landscape changes) and the increasing availability of software for data analysis are steps towards fully enabling synthetic aperture radar data in the industrial world, as well as in the scientific communities of data users. In the latter case, a further boost of research in data assimilation techniques should be considered a priority to favour the dialogue between different disciplines and to promote the development of new research areas and downstream applications.

Author Contributions

All the authors contributed extensively to the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely thank Airbus Defense and Space UK and Catapult SA for providing NovaSAR data from the commissioning phase. This work has been partially funded by the UK Space Agency under the aegis of the project “UKSA IPP–Space Enabled Monitoring of Illegal Gold Mines”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Campbell, J.B.; Wynne, R.H. Introduction to Remote Sensing; The Guilford Press: New York, NY, USA, 2011. [Google Scholar]
  2. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  3. Srivastava, P.K.; Malhi, R.K.M.; Pandey, P.C.; Akash Anand, P.; Singh, M.K.; Pandey, A.G. Revisiting hyperspectral remote sensing: Origin, processing, applications and way forward. In Hyperspectral Remote Sensing; Pandey, P.C., Srivastava, P.K., Balzter, H., Bhattacharya, B., Petropoulos, G.P., Eds.; Elsevier: Amsterdam, The Netherlands, 2020; pp. 3–21. [Google Scholar]
  4. Giardino, C.; Bresciani, M.; Braga, F.; Fabbretto, A.; Ghirardi, N.; Pepe, M.; Gianinetto, M.; Colombo, R.; Cogliati, S.; Ghebrehiwot, S.; et al. First Evaluation of PRISMA Level 1 Data for Water Applications. Sensors 2020, 20, 4553. [Google Scholar] [CrossRef]
  5. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. A New Framework for SAR Multitemporal Data RGB Representation: Rationale and Products. IEEE Trans. Geosci. Remote Sens. 2015, 53, 117–133. [Google Scholar] [CrossRef]
  6. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  7. Carlson, T.C.; Ripley, D.A. On the relationship between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 1997, 62, 241–252. [Google Scholar] [CrossRef]
  8. Di Martino, G.; Iodice, A. Maritime Surveillance with Synthetic Aperture Radar; IET Digital Library: London, UK, 2020; Available online: https://shop.theiet.org/maritime-surveillance-with-synthetic-aperture-radar (accessed on 1 February 2020).
  9. Lanari, R.; Mora, O.; Manunta, M.; Mallorqui, J.; Berardino, P.; Sansosti, E. A small-baseline approach for investigating deformations on full-resolution differential SAR interferograms. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1377–1386. [Google Scholar] [CrossRef]
  10. Taubenböck, H.; Esch, T.; Felbier, A.; Wiesner, M.; Roth, A.; Dech, S. Monitoring urbanization in mega cities from space. Remote Sens. Environ. 2012, 117, 162–176. [Google Scholar] [CrossRef]
  11. Cecinati, F.; Amitrano, D.; Leoncio, L.B.; Walugendo, E.; Guida, R.; Iervolino, P.; Natarajan, S. Exploitation of ESA and NASA Heritage Remote Sensing Data for Monitoring the Heat Island Evolution in Chennai with the Google Earth Engine. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 6328–6331. [Google Scholar]
  12. Ban, Y.; Jacob, A. Fusion of Multitemporal Spaceborne SAR and Optical Data for Urban Mapping and Urbanization Monitoring. In Multitemporal Remote Sensing. Remote Sensing and Digital Image Processing; Ban, Y., Ed.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 20, ISBN 978-3-319-47035-1. [Google Scholar]
  13. Bargiel, D.; Herrmann, S. Multi-temporal land-cover classification of agricultural areas in two European regions with high resolution spotlight TerraSAR-X data. Remote Sens. 2011, 3, 859–877. [Google Scholar] [CrossRef]
  14. Kayabol, K.; Zerubia, J. Unsupervised amplitude and texture classification of SAR images with multinomial latent model. IEEE Trans. Image Process. 2013, 22, 561–572. [Google Scholar] [CrossRef]
  15. Amitrano, D.; Cecinati, F.; Di Martino, G.; Iodice, A.; Mathieu, P.-P.P.-P.; Riccio, D.; Ruello, G. Multitemporal Level-1β Products: Definitions, Interpretation, and Applications. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6545–6562. [Google Scholar] [CrossRef]
  16. Di Martire, D.; Paci, M.; Confuorto, P.; Costabile, S.; Guastaferro, F.; Verta, A.; Calcaterra, D. A nation-wide system for landslide mapping and risk management in Italy: The second Not-ordinary Plan of Environmental Remote Sensing. Int. J. Appl. Earth Obs. Geoinf. 2017, 63, 143–157. [Google Scholar] [CrossRef]
  17. Di Martire, D.; Iglesias, R.; Monells, D.; Centolanza, G.; Sica, S.; Ramondini, M.; Pagano, L.; Mallorqui, J.; Calcaterra, D. Comparison between Differential SAR interferometry and ground measurements data in the displacement monitoring of the earth-dam of Conza della Campania (Italy). Remote Sens. Environ. 2014, 148, 58–69. [Google Scholar] [CrossRef]
  18. Tapete, D.; Cigna, F. InSAR data for geohazard assessment in UNESCO World Heritage sites: State-of-the-art and perspectives in the Copernicus era. Int. J. Appl. Earth Obs. Geoinf. 2017, 63, 24–32. [Google Scholar] [CrossRef]
  19. Strozzi, T.; Luckman, A.; Murray, T.; Wegmüller, U.; Werner, C.L. Glacier motion estimation using SAR offset-tracking procedures. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2384–2391. [Google Scholar] [CrossRef]
  20. Amitrano, D.; Guida, R.; Di Martino, G.; Iodice, A. Glacier monitoring using frequency domain offset tracking applied to sentinel-1 images: A product performance comparison. Remote Sens. 2019, 11, 1322. [Google Scholar] [CrossRef]
  21. Amitrano, D.; Guida, R.; Dell’Aglio, D.; Di Martino, G.; Di Martire, D.; Iodice, A.; Costantini, M.; Malvarosa, F.; Minati, F. Long-Term Satellite Monitoring of the Slumgullion Landslide Using Space-Borne Synthetic Aperture Radar Sub-Pixel Offset Tracking. Remote Sens. 2019, 11, 369. [Google Scholar] [CrossRef]
  22. Datcu, M.; Seidel, K. Human-Centered Concepts for Exploration and Understanding of Earth Observation Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 601–609. [Google Scholar] [CrossRef]
  23. Madhok, V.; Landgrebe, D.A. A Process Model for Remote Sensing Data Analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 680–686. [Google Scholar] [CrossRef]
  24. Amitrano, D.; Guida, R.; Ruello, G. Multitemporal SAR RGB Processing for Sentinel-1 GRD Products: Methodology and Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1497–1507. [Google Scholar] [CrossRef]
  25. Mendel, J.M. Fuzzy Logic Systems for Engineering: A Tutorial. Proc. IEEE 1995, 83, 345–377. [Google Scholar] [CrossRef]
  26. Santini, S.; Gupta, A.; Jain, R. Emergent semantics through interaction in image databases. IEEE Trans. Knowl. Data Eng. 2001, 13, 337–351. [Google Scholar] [CrossRef]
  27. Atkin, A. Peirce’s Theory of Signs; Zalta, E.N., Ed.; The Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2013; Available online: https://plato.stanford.edu/ (accessed on 2 March 2020).
  28. Marr, D. Vision; W. H. Freeman: San Francisco, CA, USA, 1982. [Google Scholar]
  29. Esch, T.; Thiel, M.; Schenk, A.; Roth, A.; Muller, A. Delineation of Urban Footprints From TerraSAR-X Data by Analyzing Speckle Characteristics and Intensity Information. IEEE Trans. Geosci. Remote Sens. 2010, 48, 905–916. [Google Scholar] [CrossRef]
  30. Freeman, A. SAR Calibration: An Overview. IEEE Trans. Geosci. Remote Sens. 1992, 30, 1107–1121. [Google Scholar] [CrossRef]
  31. Italian Space Agency. COSMO-SkyMed Mission and Products Description; 2019. Available online: https://www.asi.it/wp-content/uploads/2019/08/COSMO-SkyMed-Mission-and-Products-Description_rev3-2.pdf (accessed on 1 June 2020). [Google Scholar]
  32. Infoterra. Radiometric Calibration of TerraSAR-X Data; 2008. [Google Scholar]
  33. Torre, A.; Calabrese, D.; Porfilio, M. COSMO-SkyMed: Image quality achievements. In Proceedings of the 5th International Conference on Recent Advances in Space Technologies—RAST2011, Istanbul, Turkey, 9–11 June 2011; pp. 861–864. [Google Scholar]
  34. Schwerdt, M.; Schmidt, K.; Ramon, N.T.; Klenk, P.; Yague-Martinez, N.; Prats-Iraola, P.; Zink, M.; Geudtner, D. Independent system calibration of Sentinel-1B. Remote Sens. 2017, 9, 511. [Google Scholar] [CrossRef]
  35. Franceschetti, G.; Lanari, R. Synthetic Aperture Radar Processing; CRC Press: Boca Raton, FL, USA, 1999. [Google Scholar]
  36. Di Martino, G.; Poderico, M.; Poggi, G.; Riccio, D.; Verdoliva, L. Benchmarking Framework for SAR Despeckling. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1596–1615. [Google Scholar] [CrossRef]
  37. De Grandi, G.F.; Leysen, M.; Lee, J.S.; Schuler, D. Radar reflectivity estimation using multiple SAR scenes of the same target: Technique and applications. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, San Diego, CA, USA, 27 July–1 August 1997; pp. 1047–1050. [Google Scholar]
  38. Yu, Y.; Acton, S.T. Speckle Reducing Anisotropic Diffusion. IEEE Trans. Image Process. 2002, 11, 1260–1270. [Google Scholar]
  39. Su, X.; Deledalle, C.; Tupin, F.; Sun, H. Two-Step Multitemporal Nonlocal Means for Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6181–6196. [Google Scholar]
  40. Cozzolino, D.; Verdoliva, L.; Scarpa, G.; Poggi, G. Nonlocal SAR Image Despeckling by Convolutional Neural Networks. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5117–5120. [Google Scholar]
  41. Wang, P.; Zhang, H.; Patel, V.M. SAR Image Despeckling Using a Convolutional Neural Network. IEEE Signal Process. Lett. 2017, 24, 1763–1767. [Google Scholar] [CrossRef]
  42. Cao, X.; Ji, Y.; Wang, L.; Ji, B.; Jiao, L.; Han, J. SAR image change detection based on deep denoising and CNN. IET Image Process. 2019, 13, 1509–1515. [Google Scholar] [CrossRef]
  43. Ferraioli, G.; Pascazio, V.; Vitale, S. A Novel Cost Function for Despeckling using Convolutional Neural Networks. In Proceedings of the 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 22–24 May 2019. [Google Scholar]
  44. Singh, A. Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef]
  45. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2407. [Google Scholar] [CrossRef]
  46. Lee, J.S.; Jurkevich, I. Segmentation of SAR images. IEEE Trans. Geosci. Remote Sens. 1989, 27, 674–680. [Google Scholar] [CrossRef]
  47. Bruzzone, L.; Prieto, D.F. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182. [Google Scholar] [CrossRef]
  48. Bovolo, F.; Bruzzone, L. The Time Variable in Data Fusion: A Change Detection Perspective. IEEE Geosci. Remote Sens. Mag. 2015, 3, 8–26. [Google Scholar] [CrossRef]
  49. Rignot, E.J.M.; Zyl, J.J. Change Detection Techniques for ERS-1 SAR Data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 896–906. [Google Scholar] [CrossRef]
  50. Bazi, Y.; Bruzzone, L.; Melgani, F. An Unsupervised Approach Based on the Generalized Gaussian Model to Automatic Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887. [Google Scholar] [CrossRef]
  51. Grover, K.; Quegan, S.; Da Costa Freitas, C. Quantitative estimation of tropical forest cover by SAR. IEEE Trans. Geosci. Remote Sens. 1999, 37, 479–490. [Google Scholar] [CrossRef]
  52. Bovolo, F.; Bruzzone, L. A Detail-Preserving Scale-Driven Approach to Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972. [Google Scholar] [CrossRef]
  53. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. Small Reservoirs Extraction in Semiarid Regions Using Multitemporal Synthetic Aperture Radar Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3482–3492. [Google Scholar] [CrossRef]
  54. Cian, F.; Marconcini, M.; Ceccato, P. Normalized Difference Flood Index for rapid flood mapping: Taking advantage of EO big data. Remote Sens. Environ. 2018, 209, 712–730. [Google Scholar] [CrossRef]
  55. Inglada, J.; Mercier, G. A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1432–1445. [Google Scholar] [CrossRef]
  56. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Zoppetti, C. Nonparametric Change Detection in Multitemporal SAR Images Based on Mean-Shift Clustering. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2022–2031. [Google Scholar] [CrossRef]
  57. Lombardo, P.; Oliver, C.J. Maximum likelihood approach to the detection of changes between multitemporal SAR images. IEE Proc. Radar Sonar Navig. 2001, 148, 200–210. [Google Scholar] [CrossRef]
  58. Xiong, B.; Chen, J.M.; Kuang, G. A change detection measure based on a likelihood ratio and statistical properties of SAR intensity images. Remote Sens. Lett. 2012, 3, 267–275. [Google Scholar] [CrossRef]
  59. Su, X.; Deledalle, C.; Tupin, F.; Sun, H. NORCAMA: Change Analysis in SAR Time Series by Likelihood Ratio Change Matrix Clustering. ISPRS J. Photogramm. Remote Sens. 2015, 101, 247–261. [Google Scholar] [CrossRef]
  60. Conradsen, K.; Nielsen, A.; Schou, J.; Skriver, H. A test statistic in the complex wishart distribution and its application to change detection in polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 4–19. [Google Scholar] [CrossRef]
  61. Otsu, N. A threshold section method from gray-level histograms. IEEE Trans. Syst. Man. Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  62. Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 349–353. [Google Scholar] [CrossRef]
  63. Moser, G.; Serpico, S.B. Generalized Minimum-Error Thresholding for Unsupervised Change Detection From SAR Amplitude Imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982. [Google Scholar] [CrossRef]
  64. Zhao, J.; Yang, J.; Lu, Z.; Li, P.; Liu, W.; Yang, L. A novel method of change detection in bi-temporal PolSAR data using a joint-classification classifier based on a similarity measure. Remote Sens. 2017, 9, 846. [Google Scholar] [CrossRef]
  65. Martinez, J.; Le Toan, T. Mapping of flood dynamics and spatial distribution of vegetation in the Amazon floodplain using multitemporal SAR data. Remote Sens. Environ. 2007, 108, 209–223. [Google Scholar] [CrossRef]
  66. Bujor, F.; Trouvé, E.; Valet, L.; Nicolas, J.-M.M.; Rudant, J.-P.P. Application of Log-Cumulants to the Detection of Spatiotemporal Discontinuities in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2004, 42, 2073–2084. [Google Scholar] [CrossRef]
  67. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R.; et al. Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39. [Google Scholar] [CrossRef]
  68. Pohl, C.; Van Genderen, J.L. Review Article Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef]
  69. Lehmann, E.A.; Caccetta, P.A.; Zhou, Z.S.; McNeill, S.J.; Wu, X.; Mitchell, A.L. Joint processing of landsat and ALOS-PALSAR data for forest mapping and monitoring. IEEE Trans. Geosci. Remote Sens. 2012, 50, 55–67. [Google Scholar] [CrossRef]
  70. Reiche, J.; Verbesselt, J.; Hoekman, D.; Herold, M. Fusing Landsat and SAR time series to detect deforestation in the tropics. Remote Sens. Environ. 2015, 156, 276–293. [Google Scholar] [CrossRef]
  71. Poulain, V.; Inglada, J.; Spigai, M.; Tourneret, J.Y.; Marthon, P. High-resolution optical and SAR image fusion for building database updating. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2900–2910. [Google Scholar] [CrossRef]
  72. Smets, P. What is Dempster-Shafer’s model? In Advances in the Dempster-Shafer Theory of Evidence; John Wiley & Sons, Inc.: New York, NY, USA, 1994; pp. 5–34. [Google Scholar]
  73. Brunner, D.; Lemoine, G.; Bruzzone, L. Earthquake Damage Assessment of Buildings Using VHR Optical and SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2403–2420. [Google Scholar] [CrossRef]
  74. Errico, A.; Angelino, C.V.; Cicala, L.; Persechino, G.; Ferrara, C.; Lega, M.; Vallario, A.; Parente, C.; Masi, G.; Gaetano, R.; et al. Detection of environmental hazards through the feature-based fusion of optical and SAR data: A case study in southern Italy. Int. J. Remote Sens. 2015, 36, 3345–3367. [Google Scholar] [CrossRef]
  75. Polychronaki, A.; Gitas, I.Z.; Veraverbeke, S.; Debien, A. Evaluation of ALOS PALSAR imagery for burned area mapping in greece using object-based classification. Remote Sens. 2013, 5, 5680–5701. [Google Scholar] [CrossRef]
  76. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  77. Amitrano, D.; Cecinati, F.; Di Martino, G.; Iodice, A.; Mathieu, P.-P.; Riccio, D.; Ruello, G. Feature Extraction From Multitemporal SAR Images Using Selforganizing Map Clustering and Object-Based Image Analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1556–1570. [Google Scholar] [CrossRef]
  78. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. RGB SAR products: Methods and applications. Eur. J. Remote Sens. 2016, 49, 777–793. [Google Scholar] [CrossRef]
  79. Salentinig, A.; Gamba, P. A General Framework for Urban Area Extraction Exploiting Multiresolution SAR Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2009–2018. [Google Scholar] [CrossRef]
  80. Landuyt, L.; Van Wesemael, A.; Schumann, G.J.P.; Hostache, R.; Verhoest, N.E.C.; Van Coillie, F.M.B. Flood Mapping Based on Synthetic Aperture Radar: An Assessment of Established Approaches. IEEE Trans. Geosci. Remote Sens. 2019, 57, 722–739. [Google Scholar] [CrossRef]
  81. Saatchi, S.S.; Soares, J.V.; Alves, D.S. Mapping deforestation and land use in Amazon rainforest by using SIR-C imagery. Remote Sens. Environ. 1997, 59, 191–202. [Google Scholar] [CrossRef]
  82. Amitrano, D.; Ciervo, F.; Di Martino, G.; Papa, M.N.; Iodice, A.; Koussoube, Y.; Mitidieri, F.; Riccio, D.; Ruello, G. Modeling Watershed Response in Semiarid Regions With High-Resolution Synthetic Aperture Radars. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2732–2745. [Google Scholar] [CrossRef]
  83. Milillo, P.; Fielding, E.J.; Shulz, W.H.; Delbridge, B.; Burgmann, R. COSMO-SkyMed spotlight interferometry over rural areas: The slumgullion landslide in Colorado, USA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2919–2926. [Google Scholar] [CrossRef]
  84. Wang, C.; Mao, X.; Wang, Q. Landslide displacement monitoring by a fully polarimetric SAR offset tracking method. Remote Sens. 2016, 8, 624. [Google Scholar] [CrossRef]
  85. Lemos, A.; Shepherd, A.; McMillan, M.; Hogg, A.E.; Hatton, E.; Joughin, I. Ice velocity of Jakobshavn Isbræ, Petermann Glacier, Nioghalvfjerdsfjorden, and Zachariæ Isstrøm, 2015–2017, from Sentinel 1-a/b SAR imagery. Cryosphere 2018, 12, 2087–2097. [Google Scholar] [CrossRef]
  86. Dellepiane, S.G.; Angiati, E. A New Method for Cross-Normalization and Multitemporal Visualization of SAR Images for the Detection of Flooded Areas. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2765–2779. [Google Scholar] [CrossRef]
  87. Nakmuenwai, P.; Yamazaki, F.; Liu, W. Multi-Temporal Correlation Method for Damage Assessment of Buildings from High-Resolution SAR Images of the 2013 Typhoon Haiyan. J. Disaster Res. 2016, 11, 577–592. [Google Scholar] [CrossRef]
  88. Refice, A.; Capolongo, D.; Pasquariello, G.; D’Addabbo, A.; Bovenga, F.; Nutricato, R.; Lovergine, F.P.; Pietranera, L. SAR and InSAR for Flood Monitoring: Examples with COSMO-SkyMed Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2711–2722. [Google Scholar] [CrossRef]
  89. Amitrano, D.; Cecinati, F.; Di Martino, G.; Iodice, A.; Mathieu, P.-P.; Riccio, D.; Ruello, G. An end-user-oriented framework for RGB representation of multitemporal SAR images and visual data mining. In Proceedings of the SPIE Remote Sensing, Edinburgh, UK, 26–28 September 2016; Volume 10004. [Google Scholar]
  90. Alves, E.I.; Andrade, A.I.A.S.S.; Vaz, D.A. A Better View over Titan Drainage Networks Through RGB Fusion of Cassini SAR Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 414–418. [Google Scholar] [CrossRef]
  91. Perrou, T.; Garioud, A.; Parcharidis, I. Use of Sentinel-1 imagery for flood management in a reservoir-regulated river basin. Front. Earth Sci. 2018, 12, 506–520. [Google Scholar] [CrossRef]
  92. Schmitt, M.; Hughes, L.H.; Körner, M.; Zhu, X.X. Colorizing Sentinel-1 SAR images using a variational autoencoder conditioned on Sentinel-2 imagery. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Riva del Garda, Italy, 4–7 June 2018; Volume 42, pp. 1045–1051. [Google Scholar]
  93. Colin-Koeniguer, E.; Boulch, A.; Trouvé-Peloux, P.; Janez, F. Colored visualization of multitemporal SAR data for change detection: Issues and methods. In Proceedings of the EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany, 4–7 June 2018; pp. 1038–1041. [Google Scholar]
  94. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  95. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. Unsupervised Rapid Flood Mapping Using Sentinel-1 GRD SAR Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3290–3299. [Google Scholar] [CrossRef]
  96. Fung, A.K. Scattering from a Vegetation Layer. IEEE Trans. Geosci. Electron. 1979, 17, 1–6. [Google Scholar] [CrossRef]
  97. Khellaf, A.; Beghdadi, A.; Dupoisot, H. Entropic Contrast Enhancement. IEEE Trans. Med. Imaging 1991, 10, 589–592. [Google Scholar] [CrossRef]
  98. Jacobson, N.P.; Gupta, M.R.; Cole, J.B. Linear Fusion of Image Sets for Display. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3277–3288. [Google Scholar] [CrossRef]
  99. Healey, C.G.; Booth, K.S.; Enns, J.T. Visualizing real-time multivariate data using preattentive processing. ACM Trans. Model. Comput. Simul. 1995, 5, 190–221. [Google Scholar] [CrossRef]
  100. Mitchard, E.T.A. The tropical forest carbon cycle and climate change. Nature 2018, 559, 527–534. [Google Scholar] [CrossRef]
  101. Waring, R.H.; Running, S.W. Forest Ecosystems Analysis at Multiple Scales; Elsevier: Amsterdam, The Netherlands, 2007; ISBN 9780123706058. [Google Scholar]
  102. Battaglia, M.; Sands, P.J. Process-based forest productivity models and their application in forest management. For. Ecol. Manag. 1998, 102, 13–32. [Google Scholar] [CrossRef]
  103. Wolter, P.T.; Mladenoff, D.J.; Host, G.E.; Crow, T.R. Improved forest classification in the northern Lake States using multitemporal Landsat imagery. Photogramm. Eng. Rem. Sens. 1995, 61, 1129–1143. [Google Scholar]
  104. Holmgren, P.; Thuresson, T. Satellite remote sensing for forestry planning—A review. Scand. J. For. Res. 1998, 13, 90–110. [Google Scholar] [CrossRef]
  105. Iverson, L.R.; Graham, R.L.; Cook, E.A. Applications of satellite remote sensing to forested ecosystems. Landsc. Ecol. 1989, 3, 131–143. [Google Scholar] [CrossRef]
  106. Cohen, W.B.; Kushla, J.D.; Ripple, W.J.; Garman, S.L. An introduction to digital methods in remote sensing of forested ecosystems: Focus on the Pacific Northwest, USA. Environ. Manag. 1996, 20, 421–435. [Google Scholar] [CrossRef]
  107. Goodenough, D.G.; Bhogal, A.S.; Fournier, R.; Hall, R.J.; Iisaka, J.; Leckie, D.; Luther, J.E.; Magnussen, S.; Niemann, O.; Strome, W.M. Earth observation for sustainable development of forests (EOSD). In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seattle, WA, USA, 6–10 July 1998; pp. 57–60. [Google Scholar]
  108. Landsberg, J.; Gower, S.T. Application of Physiological Ecology to Forest Production; Academic Press: Cambridge, MA, USA, 1996. [Google Scholar]
  109. Lu, M.; Chen, B.; Liao, X.; Yue, T.; Yue, H.; Ren, S.; Li, X.; Nie, Z.; Xu, B. Forest types classification based on multi-source data fusion. Remote Sens. 2017, 9, 1153. [Google Scholar] [CrossRef]
  110. Mariotti D’Alessandro, M.; Tebaldini, S. Digital terrain model retrieval in tropical forests through P-Band SAR tomography. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6774–6781. [Google Scholar] [CrossRef]
  111. Jha, C.S.; Goparaju, L.; Tripathi, A.; Gharai, B.; Raghubanshi, A.S.; Singh, J.S. Forest fragmentation and its impact on species diversity: An analysis using remote sensing and GIS. Biodivers. Conserv. 2005, 14, 1681–1698. [Google Scholar] [CrossRef]
  112. Chen, G.; Meentemeyer, R. Remote Sensing of Forest Damage by Diseases and Insects. In Remote Sensing for Sustainability; Weng, Q., Ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 145–162. [Google Scholar]
  113. Sunar, F.; Oezkan, C. Forest fire analysis with remote sensing data. Int. J. Remote Sens. 2001, 22, 2265–2277. [Google Scholar] [CrossRef]
  114. Aguilar, A. Remote Sensing of Forest Regeneration in Highland Tropical Forests. GIScience Remote Sens. 2005, 42, 66–79. [Google Scholar] [CrossRef]
  115. Tanase, M.A.; de la Riva, J.; Santoro, M.; Pérez-Cabello, F.; Kasischke, E. Sensitivity of SAR data to post-fire forest regrowth in Mediterranean and boreal forests. Remote Sens. Environ. 2011, 115, 2075–2085. [Google Scholar] [CrossRef]
  116. Kurvonen, L.; Pulliainen, J.T.; Hallikainen, M.T. Retrieval of Biomass in Boreal Forests from Multitemporal ERS-1 and JERS-1 SAR Images. IEEE Trans. Geosci. Remote Sens. 1999, 37, 198–205. [Google Scholar] [CrossRef]
  117. Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr—Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910. [Google Scholar] [CrossRef]
  118. Hansen, M.C.; Shimabukuro, Y.E.; Potapov, P.; Pittman, K. Comparing annual MODIS and PRODES forest cover change data for advancing monitoring of Brazilian forest cover. Remote Sens. Environ. 2008, 112, 3784–3793. [Google Scholar] [CrossRef]
  119. Diniz, C.G.; de Almeida Souza, A.A.; Santos, D.C.; Dias, M.C.; da Luz, N.C.; de Moraes, D.R.V.; Maia, J.S.A.; Gomes, A.R.; da Silva Narvaes, I.; Valeriano, D.M.; et al. DETER-B: The New Amazon Near Real-Time Deforestation Detection System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3619–3628. [Google Scholar] [CrossRef]
  120. Hansen, M.C.; Krylov, A.; Tyukavina, A.; Potapov, P.V.; Turubanova, S.; Zutta, B.; Ifo, S.; Margono, B.; Stolle, F.; Moore, R. Humid tropical forest disturbance alerts using Landsat data. Environ. Res. Lett. 2016, 11, 034008. [Google Scholar] [CrossRef]
  121. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-Resolution Global Maps of 21st-Century Forest Cover Change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [PubMed]
  122. Wylie, D.P.; Jackson, D.L.; Menzel, W.P.; Bates, J.J. Global cloud cover trends inferred from two decades of HIRS observations. J. Clim. 2005, 18, 3021–3031. [Google Scholar] [CrossRef]
  123. Sannier, C.; McRoberts, R.E.; Fichet, L.V. Suitability of Global Forest Change data to report forest cover estimates at national level in Gabon. Remote Sens. Environ. 2016, 173, 326–338. [Google Scholar] [CrossRef]
  124. Cunningham, D.; Cunningham, P.; Fagan, M.E. Identifying biases in global tree cover products: A case study in Costa Rica. Forests 2019, 10, 853. [Google Scholar] [CrossRef]
  125. Ulaby, F.T.; Long, D.G. Microwave Radar and Radiometric Remote Sensing; The University of Michigan Press: Ann Arbor, MI, USA, 2014. [Google Scholar]
  126. Hoekman, D.H. Radar backscattering of forest stands. Int. J. Remote Sens. 1985, 6, 325–343. [Google Scholar] [CrossRef]
  127. Westman, W.E.; Paris, J.F. Detecting forest structure and biomass with C-band multipolarization radar: Physical model and field tests. Remote Sens. Environ. 1987, 22, 249–269. [Google Scholar] [CrossRef]
  128. Richards, J.A.; Sun, G.; Simonett, D.S. L-Band Radar Backscatter Modeling of Forest Stands. IEEE Trans. Geosci. Remote Sens. 1987, GE-25, 487–498. [Google Scholar] [CrossRef]
  129. Rignot, E.J.M.; Williams, C.L.; Viereck, L.A. Mapping of Forest Types in Alaskan Boreal Forests Using SAR Imagery. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1051–1059. [Google Scholar] [CrossRef]
  130. Ranson, K.J.; Saatchi, S.; Sun, G. Boreal Forest Ecosystem Characterization with SIR-C/XSAR. IEEE Trans. Geosci. Remote Sens. 1995, 33, 867–876. [Google Scholar] [CrossRef]
  131. Ranson, K.J.; Sun, G. An Evaluation of AIRSAR and SIR-C/X-SAR Images for Mapping Northern Forest Attributes in Maine, USA. Remote Sens. Environ. 1995, 59, 202–223. [Google Scholar] [CrossRef]
  132. Pierce, L.E.; Bergen, K.M.; Dobson, M.C.; Ulaby, F.T. Multitemporal land-cover classification using SIR-C/X-SAR imagery. Remote Sens. Environ. 1998, 64, 20–33. [Google Scholar] [CrossRef]
  133. Ortiz, S.M.; Breidenbach, J.; Knuth, R.; Kändler, G. The influence of DEM quality on mapping accuracy of coniferous- and deciduous-dominated forest using TerraSAR-X images. Remote Sens. 2012, 4, 661–681. [Google Scholar] [CrossRef]
  134. Saatchi, S.S.; Rignot, E. Classification of Boreal forest cover types using SAR images. Remote Sens. Environ. 1997, 60, 270–281. [Google Scholar] [CrossRef]
  135. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  136. Shimada, M.; Itoh, T.; Motooka, T.; Watanabe, M.; Shiraishi, T.; Thapa, R.; Lucas, R. New global forest/non-forest maps from ALOS PALSAR data (2007–2010). Remote Sens. Environ. 2014, 155, 13–31. [Google Scholar] [CrossRef]
  137. Mermoz, S.; Le Toan, T. Forest disturbances and regrowth assessment using ALOS PALSAR data from 2007 to 2010 in Vietnam, Cambodia and Lao PDR. Remote Sens. 2016, 8, 271. [Google Scholar] [CrossRef]
  138. Lei, Y.; Lucas, R.; Siqueira, P.; Schmidt, M.; Treuhaft, R. Detection of forest disturbance with spaceborne repeat-pass SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2424–2439. [Google Scholar] [CrossRef]
  139. Almeida-Filho, R.; Rosenqvist, A.; Shimabukuro, Y.E.; Silva-Gomez, R. Detecting deforestation with multitemporal L-band SAR imagery: A case study in western Brazilian Amazônia. Int. J. Remote Sens. 2007, 28, 1383–1390. [Google Scholar] [CrossRef]
  140. Watanabe, M.; Koyama, C.N.; Hayashi, M.; Nagatani, I.; Shimada, M. Early-stage deforestation detection in the tropics with L-band SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2127–2133. [Google Scholar] [CrossRef]
  141. Ryan, C.M.; Hill, T.; Woollen, E.; Ghee, C.; Mitchard, E.; Cassells, G.; Grace, J.; Woodhouse, I.H.; Williams, M. Quantifying small-scale deforestation and forest degradation in African woodlands using radar imagery. Glob. Chang. Biol. 2012, 18, 243–257. [Google Scholar] [CrossRef]
  142. Almeida-Filho, R.; Shimabukuro, Y.E.; Rosenqvist, A.; Sánchez, G.A. Using dual-polarized ALOS PALSAR data for detecting new fronts of deforestation in the Brazilian Amazônia. Int. J. Remote Sens. 2009, 30, 3735–3743. [Google Scholar] [CrossRef]
  143. Motohka, T.; Shimada, M.; Uryu, Y.; Setiabudi, B. Using time series PALSAR gamma nought mosaics for automatic detection of tropical deforestation: A test study in Riau, Indonesia. Remote Sens. Environ. 2014, 155, 79–88. [Google Scholar] [CrossRef]
  144. Santoro, M.; Fransson, J.E.S.; Eriksson, L.E.B.; Ulander, L.M.H. Clear-Cut Detection in Swedish Boreal Forest Using Multi-Temporal ALOS PALSAR Backscatter Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 618–631. [Google Scholar] [CrossRef]
  145. Joshi, N.P.; Mitchard, E.T.A.; Schumacher, J.; Johannsen, V.K.; Saatchi, S.; Fensholt, R. L-Band SAR Backscatter Related to Forest Cover, Height and Aboveground Biomass at Multiple Spatial Scales across Denmark. Remote Sens. 2015, 7, 4442–4472. [Google Scholar] [CrossRef]
  146. Whittle, M.; Quegan, S.; Uryu, Y.; Stüewe, M.; Yulianto, K. Detection of tropical deforestation using ALOS-PALSAR: A Sumatran case study. Remote Sens. Environ. 2012, 124, 83–98. [Google Scholar] [CrossRef]
  147. Walker, W.S.; Kellndorfer, J.M.; Kirsch, K.M.; Stickler, C.M.; Nepstad, D.C. Large-Area Classification and Mapping of Forest and Land Cover in the Brazilian Amazon: A Comparative Analysis of ALOS/PALSAR and Landsat Data Sources. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 594–604. [Google Scholar] [CrossRef]
  148. Reiche, J.; Souzax, C.M.; Hoekman, D.H.; Verbesselt, J.; Persaud, H.; Herold, M. Feature level fusion of multi-temporal ALOS PALSAR and Landsat data for mapping and monitoring of tropical deforestation and forest degradation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2159–2173. [Google Scholar] [CrossRef]
  149. Wegmüller, U.; Werner, C.L. SAR Interferometric Signatures of Forest. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1153–1161. [Google Scholar] [CrossRef]
  150. Quegan, S.; Le Toan, T.; Yu, J.J.; Ribbes, F.; Floury, N. Multitemporal ERS SAR analysis applied to forest mapping. IEEE Trans. Geosci. Remote Sens. 2000, 38, 741–753. [Google Scholar] [CrossRef]
  151. Hoekman, D. Monitoring Tropical Peat Swamp Deforestation and Hydrological Dynamics by ASAR and PALSAR. In Geoscience and Remote Sensing; Ho, P.-G., Ed.; IntechOpen: London, UK, 2009. [Google Scholar]
  152. Bouvet, A.; Mermoz, S.; Ballère, M.; Koleck, T.; Le Toan, T. Use of the SAR Shadowing Effect for Deforestation Detection with Sentinel-1 Time Series. Remote Sens. 2018, 10, 1250. [Google Scholar] [CrossRef]
  153. Lohberger, S.; Stängel, M.; Atwood, E.C.; Siegert, F. Spatial evaluation of Indonesia’s 2015 fire-affected area and estimated carbon emissions using Sentinel-1. Glob. Chang. Biol. 2018, 24, 644–654. [Google Scholar] [CrossRef]
  154. Reiche, J.; Hamunyela, E.; Verbesselt, J.; Hoekman, D.; Herold, M. Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2. Remote Sens. Environ. 2018, 204, 147–161. [Google Scholar] [CrossRef]
  155. Sica, F.; Pulella, A.; Rizzoli, P. Forest Classification and Deforestation Mapping by Means of Sentinel-1. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2635–2638. [Google Scholar]
  156. Schlund, M.; von Poncet, F.; Hoekman, D.H.; Kuntz, S.; Schmullius, C. Importance of bistatic SAR features from TanDEM-X for forest mapping and monitoring. Remote Sens. Environ. 2014, 151, 16–26. [Google Scholar] [CrossRef]
  157. Martone, M.; Rizzoli, P.; Wecklich, C.; González, C.; Bueso-Bello, J.L.; Valdo, P.; Schulze, D.; Zink, M.; Krieger, G.; Moreira, A. The global forest/non-forest map from TanDEM-X interferometric SAR data. Remote Sens. Environ. 2018, 205, 352–373. [Google Scholar] [CrossRef]
  158. Bird, R.; Whittaker, P.; Stern, B.; Angli, N.; Cohen, M.; Guida, R. NovaSAR-S: A low cost approach to SAR applications. In Proceedings of the IEEE 2013 Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Tsukuba, Japan, 23–27 September 2013. [Google Scholar]
  159. Ningthoujam, R.K.; Tansey, K.; Balzter, H.; Morrison, K.; Johnson, S.C.M.; Gerard, F.; George, C.; Burbidge, G.; Doody, S.; Veck, N.; et al. Mapping forest cover and forest cover change with airborne S-band radar. Remote Sens. 2016, 8, 577. [Google Scholar] [CrossRef]
  160. Sun, G.; Ranson, K.J. Radar modelling of forest spatial patterns. Int. J. Remote Sens. 1998, 19, 1769–1791. [Google Scholar] [CrossRef]
  161. Kuplich, T.M.; Curran, P.J.; Atkinson, P.M. Relating SAR image texture to the biomass of regenerating tropical forests. Int. J. Remote Sens. 2005, 26, 4829–4854. [Google Scholar] [CrossRef]
  162. Englhart, S.; Keuck, V.; Siegert, F. Modeling aboveground biomass in tropical forests using multi-frequency SAR data-A comparison of methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 298–306. [Google Scholar] [CrossRef]
  163. Yu, Y.; Saatchi, S. Sensitivity of L-band SAR backscatter to aboveground biomass of global forests. Remote Sens. 2016, 8, 522. [Google Scholar] [CrossRef]
  164. Mermoz, S.; Réjou-Méchain, M.; Villard, L.; Le Toan, T.; Rossi, V.; Gourlet-Fleury, S. Decrease of L-band SAR backscatter with biomass of dense forests. Remote Sens. Environ. 2015, 159, 307–317. [Google Scholar] [CrossRef]
  165. Cartus, O.; Santoro, M.; Kellndorfer, J. Mapping forest aboveground biomass in the Northeastern United States with ALOS PALSAR dual-polarization L-band. Remote Sens. Environ. 2012, 124, 466–478. [Google Scholar] [CrossRef]
  166. Lucas, R.; Bunting, P.; Clewley, D.; Armston, J.; Fairfax, R.; Fensham, R.; Accad, A.; Kelley, J.; Laidlaw, M.; Eyre, T.; et al. An Evaluation of the ALOS PALSAR L-Band Backscatter—Above Ground Biomass Relationship Queensland, Australia: Impacts of Surface Moisture Condition and Vegetation Structure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 576–593. [Google Scholar] [CrossRef]
  167. Neumann, M.; Saatchi, S.S.; Ulander, L.M.H.; Fransson, J.E.S. Assessing performance of L- and P-band polarimetric interferometric SAR data in estimating boreal forest above-ground biomass. IEEE Trans. Geosci. Remote Sens. 2012, 50, 714–726. [Google Scholar] [CrossRef]
  168. Luckman, A.; Baker, J.; Kuplich, T.M.; Yanasse, C.d.C.F.; Frery, A.C. A study of the relationship between radar backscatter and regenerating tropical forest biomass for spaceborne SAR instruments. Remote Sens. Environ. 1997, 60, 1–13. [Google Scholar] [CrossRef]
  169. Berninger, A.; Lohberger, S.; Stängel, M.; Siegert, F. SAR-based estimation of above-ground biomass and its changes in tropical forests of Kalimantan using L- and C-band. Remote Sens. 2018, 10, 831. [Google Scholar] [CrossRef]
  170. Laurin, G.V.; Balling, J.; Corona, P.; Mattioli, W.; Papale, D.; Puletti, N.; Rizzo, M.; Truckenbrodt, J.; Urban, M. Above-ground biomass prediction by Sentinel-1 multitemporal data in central Italy with integration of ALOS2 and Sentinel-2 data. J. Appl. Remote Sens. 2018, 12, 16008. [Google Scholar] [CrossRef]
  171. Ningthoujam, R.K.; Balzter, H.; Tansey, K.; Morrison, K.; Johnson, S.C.M.; Gerard, F.; George, C.; Malhi, Y.; Burbidge, G.; Doody, S.; et al. Airborne S-band SAR for forest biophysical retrieval in temperate mixed forests of the UK. Remote Sens. 2016, 8, 609. [Google Scholar] [CrossRef]
  172. Ningthoujam, R.K.; Balzter, H.; Tansey, K.; Feldpausch, T.R.; Mitchard, E.T.A.; Wani, A.A.; Joshi, P.K. Relationships of S-band radar backscatter and forest aboveground biomass in different forest types. Remote Sens. 2017, 9, 1116. [Google Scholar] [CrossRef]
  173. Ulaby, F.T.; Sarabandi, K.; Mcdonald, K.; Whitt, M.; Dobson, M.C. Michigan microwave canopy scattering model. Int. J. Remote Sens. 1990, 11, 1223–1253. [Google Scholar] [CrossRef]
  174. Yatabe, S.M.; Leckie, D.G. Clearcut and forest-type discrimination in satellite SAR imagery. Can. J. Remote Sens. 1995, 21, 455–467. [Google Scholar] [CrossRef]
  175. Fransson, J.E.S.; Walter, F.; Olsson, H. Identification of clear felled areas using Spot P and Almaz-1 SAR data. Int. J. Remote Sens. 1999, 20, 3583–3593. [Google Scholar] [CrossRef]
  176. Imhoff, M.L. Radar backscatter and biomass saturation: Ramifications for global biomass inventory. IEEE Trans. Geosci. Remote Sens. 1995, 33, 511–518. [Google Scholar] [CrossRef]
  177. Lucas, R.M.; Cronin, N.; Lee, A.; Moghaddam, M.; Witte, C.; Tickle, P. Empirical relationships between AIRSAR backscatter and LiDAR-derived forest biomass, Queensland, Australia. Remote Sens. Environ. 2006, 100, 407–425. [Google Scholar] [CrossRef]
  178. Santoro, M.; Eriksson, L.; Askne, J.; Schmullius, C. Assessment of stand-wise stem volume retrieval in boreal forest from JERS-1 L-band SAR backscatter. Int. J. Remote Sens. 2006, 27, 3425–3454. [Google Scholar] [CrossRef]
  179. Tanase, M.A.; Kennedy, R.; Aponte, C. Fire severity estimation from space: A comparison of active and passive sensors and their synergy for different forest types. Int. J. Wildl. Fire 2015, 24, 1062–1075. [Google Scholar] [CrossRef]
  180. Chuvieco, E.; Mouillot, F.; van der Werf, G.R.; San Miguel, J.; Tanasse, M.; Koutsias, N.; García, M.; Yebra, M.; Padilla, M.; Gitas, I.; et al. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sens. Environ. 2019, 225, 45–64. [Google Scholar] [CrossRef]
  181. Kasischke, E.S.; Bourgeau-Chavez, L.L.; Johnstone, J.F. Assessing spatial and temporal variations in surface soil moisture in fire-disturbed black spruce forests in Interior Alaska using spaceborne synthetic aperture radar imagery—Implications for post-fire tree recruitment. Remote Sens. Environ. 2007, 108, 42–58. [Google Scholar] [CrossRef]
  182. Siegert, F.; Ruecker, G. Use of multitemporal ERS-2 SAR images for identification of burned scars in south-east Asian tropical rainforest. Int. J. Remote Sens. 2000, 21, 831–837. [Google Scholar] [CrossRef]
  183. Siegert, F.; Hoffmann, A.A. The 1998 Forest Fires in East Kalimantan (Indonesia): A Quantitative Evaluation Using High Resolution, Multitemporal ERS-2 SAR Images and NOAA-AVHRR Hotspot Data. Remote Sens. Environ. 2000, 72, 64–77. [Google Scholar] [CrossRef]
  184. Kalogirou, V.; Ferrazzoli, P.; Vecchia, A.D.; Foumelis, M. On the SAR backscatter of burned forests: A model-based study in C-band, over burned pine canopies. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6205–6215. [Google Scholar] [CrossRef]
  185. Tanase, M.A.; Santoro, M.; Wegmüller, U.; de la Riva, J.; Pérez-Cabello, F. Properties of X-, C- and L-band repeat-pass interferometric SAR coherence in Mediterranean pine forests affected by fires. Remote Sens. Environ. 2010, 114, 2182–2194. [Google Scholar] [CrossRef]
  186. Bourgeau-Chavez, L.L.; Harrell, P.A.; Kasischke, E.S.; French, N.H.F. The detection and mapping of Alaskan wildfires using a spaceborne imaging radar system. Int. J. Remote Sens. 1997, 18, 355–373. [Google Scholar] [CrossRef]
  187. Bourgeau-Chavez, L.L.; Kasischke, E.S.; Brunzell, S.; Mudd, J.P. Mapping fire scars in global boreal forests using imaging radar data. Int. J. Remote Sens. 2002, 23, 4211–4234. [Google Scholar] [CrossRef]
  188. Huang, S.; Siegert, F. Backscatter change on fire scars in Siberian boreal forests in ENVISAT ASAR wide-swath images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 154–158. [Google Scholar] [CrossRef]
  189. Rignot, E.; Despain, D.G.; Holecz, F. The 1988 Yellowstone fires observed by imaging radars. In Proceedings of the Joint Fire Sciences Conference and Workshop, Boise, ID, USA, 15–17 June 1999; pp. 1–9. Available online: https://minerva-access.unimelb.edu.au/bitstream/handle/11343/55488/LTM_SAR%20paper.pdf?sequence=1 (accessed on 3 January 2020).
  190. Menges, C.H.; Bartolo, R.E.; Bell, D.; Hill, G.J.E. The effect of savanna fires on SAR backscatter in northern Australia. Int. J. Remote Sens. 2004, 25, 4857–4871. [Google Scholar] [CrossRef]
  191. Imperatore, P.; Azar, R.; Calo, F.; Stroppiana, D.; Brivio, P.A.; Lanari, R.; Pepe, A. Effect of the Vegetation Fire on Backscattering: An Investigation Based on Sentinel-1 Observations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4478–4492. [Google Scholar] [CrossRef]
  192. Tanase, M.A.; Santoro, M.; De La Riva, J.; Pérez-Cabello, F.; Le Toan, T. Sensitivity of X-, C-, and L-band SAR backscatter to burn severity in Mediterranean pine forests. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3663–3675. [Google Scholar] [CrossRef]
  193. Gimeno, M.; San Miguel-Ayanz, J.; Schmuck, G. Identification of burnt areas in Mediterranean forest environments from ERS-2 SAR time series. Int. J. Remote Sens. 2004, 25, 4873–4888. [Google Scholar] [CrossRef]
  194. United Nations. The United Nations World Water Development Report 2019—Leaving No One Behind; United Nations: New York, NY, USA, 2019. [Google Scholar]
  195. Burek, P.; Satoh, Y.; Fischer, G.; Kahil, M.T.; Scherzer, A.; Tramberend, S.; Nava, L.F.; Wada, Y.; Eisner, S.; Flörke, M.; et al. Water Futures and Solution: Fast Track Initiative (Final Report); IIASA Working Paper: Laxenburg, Austria, 2016. [Google Scholar]
  196. Heim, R.R. A review of twentieth-century drought indices used in the United States. Bull. Am. Meteorol. Soc. 2002, 83, 1149–1165. [Google Scholar] [CrossRef]
  197. Karpatne, A.; Khandelwal, A.; Chen, X.; Mithal, V.; Faghmous, J.; Kumar, V. Global monitoring of inland water dynamics: State-of-the-art, challenges, and opportunities. In Computational Sustainability; Lässig, J., Kersting, K., Morik, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 121–147. [Google Scholar]
  198. Huang, C.; Chen, Y.; Zhang, S.; Wu, J. Detecting, Extracting, and Monitoring Surface Water From Space Using Optical Sensors: A Review. Rev. Geophys. 2018, 56, 333–360. [Google Scholar] [CrossRef]
  199. Boelee, E.; Cecchi, P.; Kone, A. Health Impacts of Small Reservoirs in Burkina Faso; IWMI: Colombo, Sri Lanka, 2009. [Google Scholar]
  200. Choi, M.; Jacobs, J.M.; Anderson, M.C.; Bosch, D.D. Evaluation of drought indices via remotely sensed data with hydrological variables. J. Hydrol. 2013, 476, 265–273. [Google Scholar] [CrossRef]
  201. Aghakouchak, A.; Farahmand, A.; Melton, F.S.; Teixeira, J.; Anderson, M.C.; Wardlow, B.D.; Hain, C.R. Remote sensing of drought: Progress, challenges. Rev. Geophys. 2015, 53, 1–29. [Google Scholar] [CrossRef]
  202. Herndon, K.; Muench, R.; Cherrington, E.; Griffin, R. An Assessment of Surface Water Detection Methods for Water Resource Management in the Nigerien Sahel. Sensors 2020, 20, 431. [Google Scholar] [CrossRef]
  203. Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.; Kwadijk, J.; Van De Giesen, N. Earth’s surface water change over the past 30 years. Nat. Clim. Chang. 2016, 6, 810–813. [Google Scholar] [CrossRef]
  204. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  205. Trewavas, A. Malthus foiled again and again. Nature 2002, 418, 668–670. [Google Scholar] [CrossRef] [PubMed]
  206. Arledler, A.; Castracane, P.; Marin, A.; Mica, S.; Pace, G.; Quartulli, M.; Vaglio Laurin, G.; Alfari, I.; Trebossen, H. Detecting water bodies and water related features in the Niger basin area by SAR data: The ESA TIGER WADE project. In Application of Satellite Remote Sensing to Support Water Resources Management in Africa: Results from the TIGER Initiative; IHP-VII Technical Documents in Hydrology; UNESCO: Paris, France, 2010. [Google Scholar]
  207. Annor, F.O.; van De Giesen, N.; Liebe, J.R. Monitoring of Small Reservoirs Storage Using ENVISAT ASAR and SPOT Imagery in the Upper East Region of Ghana. In Application of Satellite Remote Sensing to Support Water Resources Management in Africa: Results from the TIGER Initiative; IHP-VII Technical Documents in Hydrology; UNESCO: Paris, France, 2010. [Google Scholar]
  208. Amitrano, D.; Di Martino, G.; Iodice, A.; Ruello, G.; Ciervo, F.; Papa, M.N.; Koussoube, Y. Effectiveness of high-resolution SAR for water resource management in low-income semi-arid countries. Int. J. Remote Sens. 2014, 35, 70–88. [Google Scholar] [CrossRef]
  209. Amitrano, D.; Di Martino, G.; Iodice, A.; Mitidieri, F.; Papa, M.N.; Riccio, D.; Ruello, G. Sentinel-1 for Monitoring Reservoirs: A Performance Analysis. Remote Sens. 2014, 6, 10676–10693. [Google Scholar] [CrossRef]
  210. Liebe, J.R.; van De Giesen, N.; Andreini, M.; Steenhuis, T.S.; Walter, M.T. Suitability and Limitations of ENVISAT ASAR for Monitoring Small Reservoirs in a Semiarid Area. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1536–1547. [Google Scholar] [CrossRef]
  211. Annor, F.O.; van De Giesen, N.; Liebe, J.R.; van de Zaag, P.; Tilmant, A.; Odai, S.N. Delineation of small reservoirs using radar imagery in a semi-arid environment: A case study in the upper east region of Ghana. Phys. Chem. Earth, Parts A/B/C 2009, 34, 309–315. [Google Scholar] [CrossRef]
  212. Eilander, D.; Annor, F.O.; Iannini, L.; van De Giesen, N. Remotely Sensed Monitoring of Small Reservoir Dynamics: A Bayesian Approach. Remote Sens. 2014, 6, 1191–1210. [Google Scholar] [CrossRef]
  213. Heine, I.; Francke, T.; Rogass, C.; Medeiros, P.H.A.; Bronstert, A.; Foerster, S. Monitoring seasonal changes in the water surface areas of reservoirs using TerraSAR-X time series data in semiarid northeastern Brazil. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3190–3199. [Google Scholar] [CrossRef]
  214. Baghdadi, N.; Zribi, M.; Loumagne, C.; Ansart, P.; Anguela, T.P. Analysis of TerraSAR-X data and their sensitivity to soil surface parameters over bare agricultural fields. Remote Sens. Environ. 2008, 112, 4370–4379. [Google Scholar] [CrossRef]
  215. Franceschetti, G.; Riccio, D. Scattering, Natural Surfaces, and Fractals; Academic Press: Burlington, MA, USA, 2007. [Google Scholar]
  216. Zheng, B.; Campbell, J.B.; Serbin, G.; Daughtry, C.S.T.; McNairn, H.; Pacheco, A. Remote Sensing of Tillage Status. In Land Resources Monitoring, Modeling, and Mapping with Remote Sensing; Thenkabail, P.S., Ed.; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  217. Perez-Saez, J.; Mande, T.; Rinaldo, A. Space and time predictions of schistosomiasis snail host population dynamics across hydrologic regimes in Burkina Faso. Geospat. Health 2019, 14, 796. [Google Scholar] [CrossRef]
  218. Rosen, P.A.; Hensley, S.; Joughin, I.R.; Li, F.K.; Madsen, S.N.; Rodriguez, E.; Goldstein, R.M. Synthetic aperture radar interferometry. Proc. IEEE 2000, 88, 333–382. [Google Scholar] [CrossRef]
  219. Ponce, V.M.; Hawkins, R.H. Runoff curve number: Has it reached maturity? J. Hydrol. Eng. 1996, 1, 11–19. [Google Scholar] [CrossRef]
  220. Zhang, S.; Foerster, S.; Medeiros, P.; de Araújo, J.C.; Motagh, M.; Waske, B. Bathymetric survey of water reservoirs in north-eastern Brazil based on TanDEM-X satellite data. Sci. Total Environ. 2016, 571, 575–593. [Google Scholar] [CrossRef] [PubMed]
  221. Vanthof, V.; Kelly, R. Water storage estimation in ungauged small reservoirs with the TanDEM-X DEM and multi-source satellite observations. Remote Sens. Environ. 2019, 235, 111437. [Google Scholar] [CrossRef]
222. Liebe, J.R.; van De Giesen, N.; Andreini, M. Estimation of small reservoir storage capacities in a semi-arid environment: A case study in the Upper East Region of Ghana. Phys. Chem. Earth, Parts A/B/C 2005, 30, 448–454. [Google Scholar] [CrossRef]
  223. Warren, A.; Batterbury, S.; Osbahr, H. Soil erosion in the West African Sahel: A review and an application of a “local political ecology” approach in South West Niger. Glob. Environ. Chang. 2001, 11, 79–95. [Google Scholar] [CrossRef]
224. Grimaldi, S.; Angeluccetti, I.V.; Coviello, V.; Vezza, P. Cost-effectiveness of soil and water conservation measures on the catchment sediment budget—The Laaba watershed case study, Burkina Faso. Land Degrad. Dev. 2013, 26, 737–747. [Google Scholar]
  225. Prasad, N.R.; Garg, V.; Thakur, P.K. Role of SAR data in water body mapping and reservoir sedimentation assessment. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 151–158. [Google Scholar] [CrossRef]
226. United Nations Office for Disaster Risk Reduction. Economic Losses, Poverty & Disasters 1998–2017; Centre for Research on the Epidemiology of Disasters, CRED: Brussels, Belgium, 2017. [Google Scholar]
  227. Garcia-Pintado, J.; Mason, D.C.; Dance, S.L.; Cloke, H.L.; Neal, J.C.; Freer, J.; Bates, P.D. Satellite-supported flood forecasting in river networks: A real case study. J. Hydrol. 2015, 523, 706–724. [Google Scholar] [CrossRef]
  228. Ulaby, F.T.; Batlivala, P.P.; Dobson, M.C. Microwave Backscatter Dependence on Surface Roughness, Soil Moisture, and Soil Texture: Part I—Bare Soil. IEEE Trans. Geosci. Electron. 1978, 16, 286–295. [Google Scholar] [CrossRef]
  229. de Leeuw, J.; Vrieling, A.; Shee, A.; Atzberger, C.; Hadgu, K.M.; Biradar, C.M.; Keah, H.; Turvey, C. The potential and uptake of remote sensing in insurance: A review. Remote Sens. 2014, 6, 10888–10912. [Google Scholar] [CrossRef]
  230. Mitidieri, F.; Papa, M.N.; Amitrano, D.; Ruello, G. River morphology monitoring using multitemporal sar data: Preliminary results. Eur. J. Remote Sens. 2016, 49, 889–898. [Google Scholar] [CrossRef]
  231. Hostache, R.; Chini, M.; Giustarini, L.; Neal, J.C.; Kavetski, D.; Wood, M.; Corato, G.; Pelich, R.; Matgen, P. Near-Real-Time Assimilation of SAR-Derived Flood Maps for Improving Flood Forecasts. Water Res. 2018, 54, 5516–5535. [Google Scholar] [CrossRef]
  232. Martinis, S.; Kuenzer, C.; Wendleder, A.; Huth, J.; Twele, A.; Roth, A.; Dech, S. Comparing four operational SAR-based water and flood detection approaches. Int. J. Remote Sens. 2014, 36, 3519–3543. [Google Scholar] [CrossRef]
233. Long, S.; Fatoyimbo, T.E.; Policelli, F. Flood extent mapping for Namibia using change detection and thresholding with SAR. Environ. Res. Lett. 2014, 9, 035002. [Google Scholar] [CrossRef]
234. Notti, D.; Giordan, D.; Calò, F.; Pepe, A.; Zucca, F.; Galve, J.P. Potential and Limitations of Open Satellite Data for Flood Mapping. Remote Sens. 2018, 10, 1673. [Google Scholar] [CrossRef]
  235. Bovolo, F.; Bruzzone, L. A Split-Based Approach to Unsupervised Change Detection in Large-Size Multitemporal Images: Application to Tsunami-Damage Assessment. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1658–1670. [Google Scholar] [CrossRef]
  236. Chini, M.; Hostache, R.; Giustarini, L.; Matgen, P. A hierarchical split-based approach for parametric thresholding of SAR images: Flood inundation as a test case. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6975–6988. [Google Scholar] [CrossRef]
  237. Li, Y.; Martinis, S.; Wieland, M.; Schlaffer, S.; Natsuaki, R. Urban flood mapping using SAR intensity and interferometric coherence via Bayesian network fusion. Remote Sens. 2019, 11, 2231. [Google Scholar] [CrossRef]
  238. D’Addabbo, A.; Refice, A.; Lovergine, F.P.; Pasquariello, G. DAFNE: A Matlab toolbox for Bayesian multi-source remote sensing and ancillary data fusion, with application to flood mapping. Comput. Geosci. 2017, 112, 64–75. [Google Scholar] [CrossRef]
  239. D’Addabbo, A.; Refice, A.; Pasquariello, G.; Lovergine, F.P.; Capolongo, D.; Manfreda, S. A Bayesian Network for Flood Detection Combining SAR Imagery and Ancillary Data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3612–3625. [Google Scholar] [CrossRef]
  240. Dasgupta, A.; Grimaldi, S.; Ramsankaran, R.A.A.J.; Pauwels, V.R.N.; Walker, J.P. Towards operational SAR-based flood mapping using neuro-fuzzy texture-based approaches. Remote Sens. Environ. 2018, 215, 313–329. [Google Scholar] [CrossRef]
241. Liu, Z.; Li, G.; Mercier, G.; He, Y.; Pan, Q. Change Detection in Heterogeneous Remote Sensing Images via Homogeneous Pixel Transformation. IEEE Trans. Image Process. 2018, 27, 1822–1834. [Google Scholar] [CrossRef]
  242. Benoudjit, A.; Guida, R. A novel fully automated mapping of the flood extent on sar images using a supervised classifier. Remote Sens. 2019, 11, 779. [Google Scholar] [CrossRef]
  243. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 125–138. [Google Scholar] [CrossRef]
  244. Li, Y.; Peng, C.; Chen, Y.; Jiao, L.; Zhou, L.; Shang, R. A Deep Learning Method for Change Detection in Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5751–5763. [Google Scholar] [CrossRef]
  245. Geng, J.; Ma, X.; Zhou, X.; Wang, H. Saliency-Guided Deep Neural Networks for SAR Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7365–7377. [Google Scholar] [CrossRef]
  246. Schumann, G.J.P.; Moller, D.K. Microwave remote sensing of flood inundation. Phys. Chem. Earth 2015, 83–84, 84–95. [Google Scholar] [CrossRef]
  247. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Boni, G. Use of SAR data for detecting floodwater in urban and agricultural areas: The role of the interferometric coherence. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1532–1544. [Google Scholar] [CrossRef]
  248. Iervolino, P.; Guida, R.; Iodice, A.; Riccio, D. Flooding water depth estimation with high-resolution SAR. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2295–2307. [Google Scholar] [CrossRef]
  249. Chini, M.; Pulvirenti, L.; Pierdicca, N. Analysis and Interpretation of the COSMO-SkyMed Observations of the 2011 Japan Tsunami. IEEE Geosci. Remote Sens. Lett. 2012, 9, 467–471. [Google Scholar] [CrossRef]
  250. Chini, M.; Pelich, R.; Pulvirenti, L.; Pierdicca, N.; Hostache, R.; Matgen, P. Sentinel-1 InSAR Coherence to Detect Floodwater in Urban Areas: Houston and Hurricane Harvey as A Test Case. Remote Sens. 2019, 11, 107. [Google Scholar] [CrossRef]
  251. Li, Y.; Martinis, S.; Wieland, M. Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence. ISPRS J. Photogramm. Remote Sens. 2019, 152, 178–191. [Google Scholar] [CrossRef]
  252. Pulvirenti, L.; Pierdicca, N.; Boni, G.; Fiorini, M.; Rudari, R. Flood damage assessment through multitemporal COSMO-SkyMed data and hydrodynamic models: The Albania 2010 case study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2848–2855. [Google Scholar] [CrossRef]
  253. Hostache, R.; Matgen, P.; Schumann, G.; Puech, C.; Hoffmann, L.; Pfister, L. Water level estimation and reduction of hydraulic model calibration uncertainties using satellite SAR images of floods. IEEE Trans. Geosci. Remote Sens. 2009, 47, 431–441. [Google Scholar] [CrossRef]
  254. Hoque, M.A.A.; Tasfia, S.; Ahmed, N.; Pradhan, B. Assessing spatial flood vulnerability at Kalapara Upazila in Bangladesh using an analytic hierarchy process. Sensors 2019, 19, 1302. [Google Scholar] [CrossRef] [PubMed]
255. European Commission. Copernicus Market Report, February 2019; European Commission: Brussels, Belgium, 2019. [Google Scholar]
  256. European Space Agency. Copernicus in Action Fostering User Uptake of EO Services through the Copernicus Masters and the Space App Camps; European Space Agency: Paris, France, 2016. [Google Scholar]
  257. Sozzi, M.; Marinello, F.; Pezzuolo, A.; Sartori, L. Benchmark of Satellites Image Services for Precision Agricultural Use. In Proceedings of the European Conference on Agricultural Engineering, Wageningen, The Netherlands, 8–12 July 2018. [Google Scholar]
  258. Mikes, S.; Haindl, M.; Scarpa, G.; Gaetano, R. Benchmarking of Remote Sensing Segmentation Methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2240–2248. [Google Scholar] [CrossRef]
  259. Yokoya, N.; Ghamisi, P.; Xia, J.; Sukhanov, S.; Heremans, R.; Tankoyeu, I.; Bechtel, B.; Le Saux, B.; Moser, G.; Tuia, D. Open Data for Global Multimodal Land Use Classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1363–1377. [Google Scholar] [CrossRef]
  260. Pacifici, F.; Del Frate, F.; Emery, W.J.; Gamba, P.; Chanussot, J. Urban Mapping Using Coarse SAR and Optical Data: Outcome of the 2007 GRSS Data Fusion Contest. IEEE Geosci. Remote Sens. Lett. 2008, 5, 331–335. [Google Scholar] [CrossRef]
  261. Xie, Z.; Chen, Y.; Lu, D.; Li, G.; Chen, E. Classification of land cover, forest, and tree species classes with Ziyuan-3 multispectral and stereo data. Remote Sens. 2019, 11, 164. [Google Scholar] [CrossRef]
262. Space-Tec. European Earth Observation and Copernicus Midstream Market Study; Space-Tec: Brussels, Belgium, 2013. [Google Scholar]
  263. Denis, G.; Claverie, A.; Pasco, X.; Darnis, J.P.; de Maupeou, B.; Lafaye, M.; Morel, E. Towards disruptions in Earth observation? New Earth Observation systems and markets evolution: Possible scenarios and impacts. Acta Astronaut. 2017, 137, 415–433. [Google Scholar] [CrossRef]
  264. Northern Sky Research. Satellite-Based Earth Observation, 10th ed.; Northern Sky Research: Cambridge, MA, USA, 2018. [Google Scholar]
Figure 1. Lifecycle of SAR data, from the acquisition to the generation of value-added products.
Figure 2. The Peirce semiotic triangle.
Figure 3. Change detection in semi-arid Burkina Faso using multi-temporal SAR higher-level representations. (a) Standard SAR SLC image acquired during the wet season and (b) corresponding change-detection Level-1α product (R: interferometric coherence, G: wet-season image, B: dry-season image). The reference for the identification of changing patterns is the dry-season acquisition.
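As a rough illustration of how a Level-1α-style composite like the one in Figure 3 can be assembled, the sketch below stacks the interferometric coherence and the two calibrated backscatter images into the R, G and B channels. The dB stretch limits (`db_range`) are illustrative assumptions, not values prescribed by the cited method.

```python
import numpy as np

def bitemporal_rgb(coherence, wet_db, dry_db, db_range=(-25.0, 5.0)):
    """Assemble a Level-1alpha-style RGB change composite:
    R = interferometric coherence (already in [0, 1]),
    G = wet-season backscatter, B = dry-season backscatter,
    the latter two clipped and rescaled from decibels to [0, 1]."""
    lo, hi = db_range
    scale = lambda img: np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([np.clip(coherence, 0.0, 1.0), scale(wet_db), scale(dry_db)])
```

In such a composite, stable bare surfaces appear bright in all channels, while decorrelated, seasonally changing targets lose the red (coherence) contribution.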
Figure 4. Comparison between bi-temporal representations. (a) Level-1α product. (b) RGB composite obtained as explained in [86].
Figure 5. Rendering variation of change-detection multi-temporal SAR images obtained by swapping the positions of the interferometric coherence and of the reference image for change identification. Coherence loaded on (a) the red band and (b) the blue band.
Figure 6. Fully multi-temporal change detection products. (a) Level-1β image. (b) Output of the REACTIV fusion technique. (c) Absolute change indicator map. (d) Second order log-cumulants map.
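The second-order log-cumulant map of Figure 6d reduces, per pixel, to the variance of the log-intensities over the time series: stable pixels yield values near zero, changing pixels large ones. A minimal sketch, assuming a (T, H, W) intensity stack:

```python
import numpy as np

def second_log_cumulant(stack, eps=1e-10):
    """Per-pixel second-order log-cumulant of a multi-temporal
    intensity stack of shape (T, H, W): the variance of the
    log-intensities along the temporal axis."""
    log_stack = np.log(stack + eps)      # eps guards against zero intensities
    k1 = log_stack.mean(axis=0)          # first log-cumulant (temporal log-mean)
    return ((log_stack - k1) ** 2).mean(axis=0)
```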
Figure 7. Simplified graphical representation of the role played by forest features in the backscattering as a function of the wavelength.
Figure 8. Co-polarized average EM response of forest and land features for different available sensors including ALOS-PALSAR (L-band), Sentinel-1 (C-band), TerraSAR-X (X-band) and NovaSAR (S-band).
Figure 9. Cross-polarized average EM response of forest and land features for different available sensors including ALOS-PALSAR (L-band), Sentinel-1 (C-band) and TerraSAR-X (X-band).
Figure 10. Deforestation detection in the Brazilian Amazon using ALOS-PALSAR Level-1α images. First row: Google Earth views of the study area acquired in (a) 2007 and (b) 2010. Second row: Level-1α images. Test images for change identification acquired in (c) July 2008 and (d) September 2010. Third row: (e) normalized change index and (f) normalized texture drop index maps.
Figure 11. Deforestation detection in the Brazilian Amazon using Sentinel-1 Level-1α images. First row: Landsat 8 images of the study area acquired in (a) April 2016 and (b) November 2019. Second row: Level-1α images. Test images for change identification acquired in (c) February 2016 and (d) January 2020.
Figure 12. Separability of deforestation patterns from undisturbed forest and agricultural land in the plane defined by the normalized texture and backscattering span for (a) ALOS-PALSAR and (b) Sentinel-1 images.
Figure 13. Small reservoir detection using COSMO-SkyMed high-resolution Level-1α images. (a) Bi-temporal color composite. (b) Water index map. (c) Reservoir mask obtained via thresholding.
Figure 14. Ratio between the backscattering coefficients of typical rough terrains and water surfaces at L- (orange line), S- (green line), C- (red line) and X-band (blue line). The geometric and dielectric parameters used for the simulations are defined in Table 3.
Figure 15. Co-polarized average EM response of land and water features for different available sensors including ALOS-PALSAR (L-band), Sentinel-1 (C-band), TerraSAR-X (X-band) and NovaSAR (S-band). Each box represents the dispersion of data around their median values, depicted by the red bar. The edges indicate the 25th and 75th percentiles, respectively.
Figure 16. (a) Laaba basin statistics in the time frame June 2010–December 2011 relevant to surface area (upper frame) and retained water volume (lower frame). (b) Estimated water availability trend using hydrological modeling and comparison with SAR estimates. Source [82].
Figure 17. Number of disasters per type in the period 1998–2017. Source UN [226].
Figure 18. Different phases of a flood event.
Figure 19. Evolution of the flooded area in the Albufera natural reserve (Spain). Pre-event image (blue band) acquired on 6 April 2017. Post-event images (green channels) acquired on (a) 18 April 2017, (b) 4 August 2017 and (c) 14 December 2017.
Figure 21. Global SAR (a) and optical (b) Earth observation market. Data for the SAR market have been extrapolated from [264]. Those for the optical market (segmented by sensor resolution: LR, low resolution; MR, medium resolution; HR, high resolution; VHR, very high resolution) have been extracted from [255]. Data enclosed in the ovals refer to the market compound annual growth rate (CAGR).
Table 1. Summary of the reviewed change detection approaches.
| Methodology | Selected Works | Application |
| --- | --- | --- |
| Log-ratio | Rignot and van Zyl, 1993 [49]; Bazi et al., 2005 [50]; Bruzzone et al., 2006 [52]; Bazi et al., 2006 [62] | General purpose |
| Multi-temporal normalized band ratio | Amitrano et al., 2017 [53]; Cian et al., 2018 [54] | Flood mapping |
| Information theoretical similarity | Inglada and Mercier, 2007 [55]; Aiazzi et al., 2013 [56]; Su et al., 2015 [59]; Conradsen et al., 2003 [60] | General purpose |
| Fully multi-temporal change indicators | Martinez and Le Toan, 2007 [65]; Bujor et al., 2004 [66]; Lombardo and Oliver, 2001 [57] | General purpose |
| Multi-sensor data fusion | Poulain et al., 2011 [71]; Brunner et al., 2010 [73] | Urban areas |
| | Polychronaki et al., 2013 [75]; Reiche et al., 2015 [70] | Forestry |
| Object-based processing | Amitrano et al., 2018 [77] | Water resources |
| Bi-temporal higher-level representations | Amitrano et al., 2015 [5]; Amitrano et al., 2019 [24] | General purpose |
| | Dellepiane and Angiati, 2012 [86]; Refice et al., 2014 [88] | Flood mapping |
| Fully multi-temporal higher-level representations | Amitrano et al., 2016 [15]; Colin-Koeniguer et al., 2018 [93] | General purpose |
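Of the approaches in Table 1, the log-ratio operator is simple enough to sketch: the ratio of two intensity images is log-transformed (turning multiplicative speckle into an additive term) and then thresholded. The k-sigma decision rule below is an illustrative choice, not the specific rule of the cited works.

```python
import numpy as np

def log_ratio_change(img1, img2, k=3.0, eps=1e-10):
    """Bi-temporal log-ratio change detector: r = ln(I2 / I1).
    Pixels whose |r| deviates from the global mean by more than
    k standard deviations are flagged as changed."""
    r = np.log((img2 + eps) / (img1 + eps))  # eps guards against zeros
    return np.abs(r - r.mean()) > k * r.std()
```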
Table 2. Summary of the reviewed literature about forestry categorized based on the application, the adopted methodology and the data exploited.
| Application | Selected Works | Methodology | Data Exploited |
| --- | --- | --- | --- |
| Forest type classification | Ranson et al., 1995 [130]; Ranson and Sun, 1995 [131]; Pierce et al., 1998 [132] | Backscattering analysis | C-band |
| Deforestation | Mermoz and Le Toan, 2016 [137]; Almeida-Filho et al., 2007 [139]; Joshi et al., 2015 [145]; Motohka et al., 2014 [143] | Change detection | L-band cross-pol, possibly coupled with co-pol |
| | Lehmann et al., 2012 [69]; Reiche et al., 2015 [70], 2013 [148], 2018 [154] | | L-band SAR and MS data fusion |
| Biomass estimation | Mermoz et al., 2015 [164]; Cartus et al., 2012 [165]; Lucas et al., 2010 [166]; Yu and Saatchi, 2016 [163] | Backscattering analysis | L-band SAR, cross-pol |
| Forest fires detection | Tanase et al., 2010 [185,192]; Kalogirou et al., 2014 [184]; Imperatore et al., 2017 [191] | | C- and L-band SAR, co- and/or cross-pol |
Table 3. Geometric and dielectric parameters used to characterize terrain and water surfaces.
| Simulation Parameter | Terrain | Water |
| --- | --- | --- |
| ε/ε0 | 4 | 40 |
| σ [S/m] | 0.001 | 1 |
| H | 0.75 | 0.8 |
| s [m^(1−H)] | 0.05 | 0.02 |
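The land/water contrast underlying Figure 14 can be conveyed, in a much simplified form, by comparing flat-surface Fresnel reflectivities computed from the Table 3 permittivities and conductivities. The simulations in the figure use a fractal surface description, so the sketch below (with an assumed C-band frequency and incidence angle) only captures the dielectric part of the contrast, not the roughness dependence.

```python
import numpy as np

def fresnel_reflectivity(eps_r, sigma, freq_hz, theta_deg, pol="H"):
    """Power reflectivity of a flat half-space with relative permittivity
    eps_r and conductivity sigma [S/m], at incidence angle theta_deg,
    for horizontal (H) or vertical (V) polarization."""
    eps0 = 8.854e-12
    # Complex relative permittivity including the conductive loss term
    eps = eps_r - 1j * sigma / (2 * np.pi * freq_hz * eps0)
    th = np.radians(theta_deg)
    root = np.sqrt(eps - np.sin(th) ** 2)
    if pol == "H":
        gamma = (np.cos(th) - root) / (np.cos(th) + root)
    else:
        gamma = (eps * np.cos(th) - root) / (eps * np.cos(th) + root)
    return np.abs(gamma) ** 2

# With the Table 3 dielectric parameters, water is markedly more
# reflective than terrain (assumed C-band, 35 deg incidence):
water = fresnel_reflectivity(40.0, 1.0, 5.4e9, 35.0)
terrain = fresnel_reflectivity(4.0, 0.001, 5.4e9, 35.0)
```

Because a calm water surface is also much smoother than terrain, its strong specular reflection is directed away from the radar, which is why water appears dark in SAR imagery despite its higher reflectivity.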
Table 4. Summary of the reviewed literature about water resources categorized based on the application, the adopted methodology and the data exploited.
| Application | Selected Works | Methodology | Data Exploited |
| --- | --- | --- | --- |
| Flood mapping | Gong et al., 2016 [243]; Li et al., 2019 [244]; Geng et al., 2019 [245] | Deep learning | SAR short wavelength (X-, C-band), co-pol |
| | Notti et al., 2018 [234]; Bovolo and Bruzzone, 2007 [235]; Chini et al., 2017 [236] | Thresholding | |
| | Amitrano et al., 2018 [95]; Dasgupta et al., 2018 [240] | Fuzzy systems | |
| | Amitrano et al., 2019 [24] | Semantic clustering | |
| | D'Addabbo et al., 2017 [238], 2016 [239]; Li et al., 2019 [237] | Bayes networks | |
| | Chini et al., 2019 [250], 2012 [249]; Pulvirenti et al., 2016 [247]; Li et al., 2019 [251] | Coherent change detection | |
| | Benoudjit and Guida, 2019 [242]; Liu et al., 2018 [241] | Multisensor data fusion | X-, C-band SAR and MS |
| Reservoirs mapping | Amitrano et al., 2017 [53], 2014 [82]; Heine et al., 2014 [213] | Thresholding | SAR short wavelength (X-, C-band), co-pol, high resolution |
| | Amitrano et al., 2018 [77] | Object-based | |
| Reservoirs bathymetry | Amitrano et al., 2014 [82]; Zhang et al., 2016 [220] | Regression analysis | |
| | Liebe et al., 2005 [222] | Field surveys | |
| | Vanthof and Kelly, 2019 [221] | Multi-source remote sensing data | |
| Reservoir sedimentation | Amitrano et al., 2014 [208]; Prasad et al., 2018 [225] | DEM analysis | SAR short wavelength and DEM |
Table 5. Pricing for some of the SAR sensors available today.
| Sensor | Product Type | Swath | Resolution | Price per Image |
| --- | --- | --- | --- | --- |
| TerraSAR-X / TanDEM-X / PAZ | Spotlight | Up to 10 × 10 km² | Up to 0.25 m | EUR 2125–3475 |
| | Stripmap | 30 × 50 km² | Up to 3 m | EUR 1475 |
| | ScanSAR | Up to 270 × 200 km² | Up to 18.5 m | EUR 875 |
| COSMO-SkyMed | Spotlight | 10 × 10 km² | Up to 0.9 m | EUR 650 |
| | Stripmap | 40 × 40 km² | Up to 2.6 m | EUR 300 |
| | ScanSAR | Up to 200 × 200 km² | Up to 13.5 m × 23 m | |
| RADARSAT-2 | Spotlight | Up to 5 × 20 km² | Up to 1 m | EUR ≈3500–3900 |
| | Stripmap | Up to 125 × 125 km² | Up to 3 m | EUR ≈2700–5050 |
| | ScanSAR | Up to 500 × 500 km² | Up to 25 m | EUR ≈2350 |
| ALOS-2 | Spotlight | 10 × 10 km² | Up to 1 m × 3 m | EUR ≈3200 |
| | Stripmap | Up to 70 × 70 km² | Up to 3 m | EUR ≈1900 |
| | ScanSAR | Up to 355 × 490 km² | Up to 44.2 m × 56.7 m | EUR ≈650 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.