Article

Diffuse Skylight as a Surrogate for Shadow Detection in High-Resolution Imagery Acquired Under Clear Sky Conditions

Ecosystem Management, School of Environmental and Rural Science, University of New England, Armidale, NSW 2351, Australia
*
Author to whom correspondence should be addressed.
Submission received: 1 July 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)

Abstract:
An alternative technique for shadow detection and abundance estimation is presented for high spatial resolution imagery acquired under clear sky conditions from airborne/spaceborne sensors. The method, termed the Scattering Index (SI), uses Rayleigh scattering principles to create a diffuse skylight vector as a shadow reference. From linear algebra, the proportion of diffuse skylight in each image pixel provides a per-pixel measure of shadow extent and abundance. We performed a comparative evaluation against two other methods: first-valley detection thresholding (extent) and physics-based unmixing (extent and abundance). Overall accuracy and F-score measures are used to evaluate shadow extent on both Worldview-3 and ADS40 images captured over a common scene. Image subsets were selected to capture objects well documented as shadow detection anomalies, e.g., dark water bodies. Results show improved accuracies and F-scores for shadow extent, and qualitative evaluation of abundance shows the method is invariant to scene and sensor characteristics. SI avoids shadow misclassification by avoiding the use of pixel intensity and the associated limitations of binary thresholding. The method negates the need for complex sun-object-sensor corrections, is simple to apply, and is invariant to the exponential increase in scene complexity associated with higher-resolution imagery.

Graphical Abstract

1. Introduction

The effects of shadow and illumination are intrinsic to optical remote sensing imagery of the Earth’s surface. Shadow effects are unique in imagery because their presence is caused by the obstruction of light from a source of illumination, not by the properties of a surface material [1]. In imagery, shadow presents as a change in radiance value that adds inaccuracies to already complex Earth scenes captured in high resolution. Accurate detection and quantification of shadow is critical for its removal and results in improvements for image classification procedures [2].
Understanding the physics of shadows is an important prerequisite for any detection technique. The physics of shadows is detailed in Funka-Lea and Bajcsy [1], where two shadow types are described: cast and self-cast. Both types are caused by the obstruction of light by an object, but cast shadow does not present on the obstructing object whereas self-cast shadow does. Cast shadow has two characteristics, defined as umbra and penumbra [1] and shown in Figure 1.
Umbra is that part of cast shadow where the light source is completely obstructed compared to penumbra where the light source is partially obstructed [1,4]. Dare [4] estimated penumbra location and extent using Equation (1) for an urban building on a flat horizontal ground, shown in Figure 2.
$$w = H\left(\frac{1}{\tan\left(e - \frac{\varepsilon}{2}\right)} - \frac{1}{\tan\left(e + \frac{\varepsilon}{2}\right)}\right)\quad(1)$$
where $w$ is the penumbra width, $H$ the building height, $e$ the sun elevation and $\varepsilon$ the angular width of the sun.
From Dare [4], a building height of 50 m and sun elevation of 38° produce a penumbra width of 1.23 m, the equivalent of a Worldview-3 or Ikonos high-resolution image pixel. An increase in building height or a decrease in sun elevation will widen the penumbra.
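Dare's penumbra relation in Equation (1) can be sanity-checked numerically. The sketch below assumes a solar angular width of roughly 0.533°, a standard approximation rather than a value taken from Dare [4]:

```python
import math

def penumbra_width(building_height_m, sun_elev_deg, sun_angular_width_deg=0.533):
    """Penumbra width (m) from Equation (1) for a building on flat ground.

    The default solar angular width (~0.533 deg) is an assumed standard
    value; Dare's exact figure may differ slightly.
    """
    e = math.radians(sun_elev_deg)
    eps = math.radians(sun_angular_width_deg)
    return building_height_m * (1.0 / math.tan(e - eps / 2.0)
                                - 1.0 / math.tan(e + eps / 2.0))

# Dare's worked example: a 50 m building at 38 deg sun elevation -> ~1.23 m
w = penumbra_width(50.0, 38.0)
```

Raising the building height or lowering the sun elevation in this function reproduces the widening behaviour described above.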
What can be seen from Figure 1 is that shadow is neither object radiance nor binary; it is a continuum caused by variations in illumination [5]. Quantifying the illumination in image pixels provides the metric to normalise pixels to full sun and skylight, which is advantageous for shadow detection [6]. Characterising illumination is complex due to variations in atmospheric conditions, sun-object-sensor geometry, scene topography, and surface material properties [7,8,9,10]. All shadow detection methods must consider illumination, so further examination of these effects follows.
For multispectral imagery covering the visible to SWIR spectrum, atmospheric conditions can be corrected using outputs from radiative transfer modelling software packages such as 6S [11] or MODTRAN [12]. They provide estimates of the bottom of atmosphere reflectance for horizontal surfaces at a given altitude by considering gaseous absorption, molecular scattering and aerosol profiles from varying angles relating to sun-object-sensor geometry [7,12]. These corrections are essential for pre-processing of spaceborne imagery that is characterised by atmospheric effects [8]. Such corrections are performed with software packages such as ATCOR [13] and FLAASH which incorporates MODTRAN [12]. Similar corrections using radiative transfer properties have been applied to airborne ADS40 imagery [14].
Topographic and surface material effects in Landsat TM/ETM+ and SPOT HRG images can be corrected with a priori information of terrain (digital surface models), Bidirectional Reflectance Distribution Functions and scene acquisition details [7,8,15]. Shadow detection methods in the field of computer vision and pattern recognition use invariant colour models to negate surface morphology (topography) and view angles [16]. Invariant colour models decompose image pixels into intensity (brightness) and colour (chromaticity), where intensity is used as the metric for shadow detection [17,18].
Surface material properties are modelled using Bidirectional Reflectance Distribution (BRD) functions [7,8,9,10]. A BRDF is material specific, so methods to reduce illumination effects using BRDF require a priori reflectance characteristics of all scene materials. In remote sensing, most BRDF corrections are applied to small-scale sensors, so for high-resolution imagery, this may present an exponential increase in a priori knowledge of scene materials and their BRD functions [10].
For shadow, illumination is less complex, because it is dominated by diffuse skylight [6,19,20]. For umbra, irradiance is diffuse skylight only, while penumbra is irradiated by both diffuse skylight and direct irradiance. The resulting effect on image pixels has two identifiable traits: (1) pixel radiance decreases with shadow (intensity); and (2) as shadow increases, the pixel radiance is dominated by increasingly shorter wavelengths (diffuse skylight). Shadow detection methods that use radiance (intensity) are complicated by two situations: high reflectance materials in shadow and naturally dark objects in sunlit regions [21]. We know that shadow pixels contain a mixture of diffuse skylight radiance and the small radiance response from the material [6,20]. Furthermore, the abundance of diffuse skylight is correlated with shadow depth, so shadow can be quantified indirectly from diffuse skylight [6,17,22]. Quantifying the abundance of diffuse skylight in a pixel requires unmixing the proportion of diffuse skylight independent of surface material response.
There are comprehensive reviews of shadow detection techniques by Sanin, Sanderson and Lovell [17], Adeline, Chen, Briottet, Pang and Paparoditis [21], Rüfenacht, Fredembach and Süsstrunk [22], Shahtahmassebi, Yang, Wang, Moore and Shen [2] and Ullah, Ullah, Uzair and Rehman [18], and the techniques within are reviewed considering the illumination-shadow model. The reviews consider one or more of the four shadow detection taxa: (1) Property-based methods (histogram thresholding, invariant colour models and object segmentation); (2) Model-based/geometrical-based methods (require a priori knowledge of scene relief and sun location); (3) Physics-based methods (illumination, atmospheric conditions and surface material properties considered); and, (4) Machine Learning (supervised and unsupervised clustering techniques).
Property-based methods consist of either intensity thresholding on shadow-invariant NIR spectra or colour and intensity indices derived from invariant colour models. Intensity thresholding utilises longer wavelength spectra, particularly NIR, which are largely immune to diffuse skylight effects [23]. In the shadow detection review by Adeline, Chen, Briottet, Pang and Paparoditis [21], the intensity thresholding approach of Nagao, et al. [24] was the best performer, with a mean F-score of 92.5 across six high-resolution images.
Invariant colour models normalise illumination by decoupling pixel responses into intensity and colour using RGB colour space transformations. Colour, or ‘colour constancy’ is invariant to illumination, while intensity reflects brightness (radiance). These methods exploit the extracted colour, hue and intensity information to detect shadow [16,17,25]. In colour aerial imagery, diffuse skylight in shadow areas is visually apparent in the colour constant transformation appearing as strong blue-violet in colour [25]. The strength of these approaches is that the complexity of illumination is normalised from inherent image properties, which avoids corrections using complex physics and scene-specific references. Invariant colour models do not overcome atmospheric effects and assume Lambertian properties for all surface materials.
Model-based methods require a priori scene surface morphology data and sun-sensor-object geometry to delineate shadows. Surface morphology data include Digital Surface Models (DSMs) of a spatial scale equal to or better than that of the scene. Shadow is determined by geometric calculations, so the success of these models relies on accurate co-registration of the DSM with scene objects. The models also require exact sun zenith and azimuth angles at image acquisition time. The advantage of these models is that they use DSMs and no spectral information, and so are classified as invariant to illumination. The disadvantage is that the output is a binary mask containing no information on the abundance of shadow and diffuse skylight.
Physics-based methods correct for illumination variation by using atmospheric correction, sun-object-sensor geometry, terrain and surface material properties to estimate at-ground reflectance. To define reflectance, the nomenclature of [26] states that reflection is the process whereby electromagnetic flux incident on a surface leaves that surface without a change in wavelength; reflectance is then the fraction of flux reflected from that surface, i.e., out/in. As previously mentioned, 6S, ATCOR and FLAASH estimate reflectance from a horizontal surface, with ATCOR being the only approach capable of considering topographic or surface material effects. Dymond and Shepherd [7] used an algorithm with SPOT 4 imagery corrected to reflectance with 6S and a digital elevation model to derive a fully topographically corrected image that gives the appearance of being ‘flattened’. Flood, Danaher, Gill and Gillingham [8] extended this work and introduced surface material corrections using BRDFs on Landsat TM/ETM+ and SPOT HRG imagery, achieving excellent results by reducing errors due to BRDF by up to 50%. The advantage of these two approaches is that physics-based information is used to correct for illumination. The methods of Flood, Danaher, Gill and Gillingham [8] and Dymond and Shepherd [7] are applied to small-scale images with digital elevation models to produce a topographically flattened result. However, unlike ATCOR, the 6S and FLAASH methods do not consider the blue-skew effects of diffuse skylight in umbra, so shadow areas remain.
Another physics-based approach is the de-shadowing algorithm by Adler-Golden, Matthew, Anderson, Felde and Gardner [6] applied to an AVIRIS scene atmospherically corrected to reflectance using FLAASH. By considering shadow as an endmember, a matched filter vector that considers shadow as ‘black’ is used to identify shadow abundance per pixel (shadow fraction). A spectrally dependent ratio of direct-to-diffuse illumination is then applied to rebalance the reflectance values. The shadow fraction is then applied to the rebalanced reflectance data to de-shadow the image. Richter and Müller [20] refined the approach to improve estimates of the shadow fraction. Topographic effects and surface material properties were not applied in these works, but could be corrected with a tool such as ATCOR [13]. The derivation of the matched filter vector for shadow is constrained to scene-dependent measures, i.e., scene covariance matrix and mean spectral value. In a more recent study by Schläpfer, et al. [27] a cast shadow algorithm is used for deriving aerosol optical thickness distribution measures to improve atmospheric correction in high-resolution imagery. The cast shadow algorithm presented considers the diffuse skylight effect by exploiting the scattering effect via visible band ratios. The advantages of these methods are that atmospheric corrections and diffuse skylight are considered, with excellent results achieved.
Machine learning techniques include supervised and unsupervised classifications. Supervised classification uses a priori information about surface cover types to generate a land cover map: user-defined training sites (pixels) are selected and assigned to multiple land cover classes, and an algorithm then assigns all other pixels in the image to one of these classes [28]. Unsupervised classification algorithms assign pixels to a user-defined number of classes that are statistically generated without a priori information. Unless applied to illumination-corrected data, the classification accuracy of both approaches is wholly reliant on scene resolution and characteristics for shadow detection, so they are not considered further.
In summary, all models possess varying strengths and limitations in their approach to shadow detection. Property-based approaches work well and have the advantage of not requiring in situ or ancillary data, but they are scene dependent and require user-specified thresholding. Physics-based methods address the issue of diffuse skylight but are mostly applied to small-scale imagery that requires atmospheric correction and ancillary data, such as terrain and sensor BRDF models. These models are rarely applied to higher-resolution images that possess exponential increases in scene complexity that require commensurate scale BRDF models and accurately co-registered terrain data. Model-based methods work well for delineating cast shadow extents where accurate co-registered terrain information is available but do not measure shadow abundance. Machine learning techniques are repeatable across varying scales of imagery, but their performance is reliant upon scene dependency and data dimensionality. An ideal approach to shadow detection should address the limitations of the discussed models while drawing on the efficiencies and strengths from all.
The method proposed uses the diffuse skylight effect as a surrogate for shadow detection, an approach that differs from methods that avoid diffuse skylight effects. Diffuse skylight is quantified under clear sky conditions using a Rayleigh scattering model [29,30] that is converted to a reference vector. All image pixels are converted to unit vectors, and linear algebra determines the proportion of diffuse skylight in each pixel using spectral similarity (colour), not intensity. We test the hypothesis that the shadow extent and abundance of image pixels can be quantified by the proportion of a diffuse skylight vector present in each pixel vector. This method uses only vector orientation to delineate shadow, and thus eliminates the complex physics-based corrections of pixel magnitude associated with sun-object-scene geometry. Importantly, a unit vector pixel is a linearly scaled version of the original image pixel, and the spectral angle does not change post-scaling. The method is invariant to scene-sensor characteristics and surface material properties, and provides a physics-based estimate of shadow abundance. The method is simple to apply, and is thus beneficial for operational analysis of high-resolution imagery.
The method has been implemented using IDL code and Research Systems, Inc. ENVI software, but the simplicity of the method permits application using just the standard interface tools available in commercial remote sensing software.

2. Materials and Methods

2.1. Method

The method proposed here is called Scattering Index (SI) and consists of (1) the creation of a diffuse skylight vector using a Rayleigh scattering model, (2) normalisation of high-resolution imagery by conversion to unit vector form, (3) applying vector projection (linear algebra) to quantify proportion of diffuse skylight ‘colour’ in each normalised image pixel, and (4) defining a threshold for proportion to determine shadow extent and abundance.
Diffuse skylight causes a non-linear effect across spectra and is modelled using an exponentially decreasing function $\lambda^{x}$, where the Ångström-law exponent $x$ determines the relative degree of scattering for wavelength $\lambda$ [6,29,31]. The physical principle is that shorter wavelengths are scattered exponentially more than longer wavelengths.
The Rayleigh model ($\lambda^{-4}$) represents scattering in a ‘clear sky’, but the function can be fitted to varying degrees of sky haze by adjusting the exponent value [29,31]. As the exponent increases from −4 to 0, scattering across all spectra approaches equality, which is modelled by the zero exponent ($\lambda^{0}$). The Rayleigh model is best suited to images that are atmospherically corrected or where atmospheric effects are negligible and hence simulate ‘clear sky’.
The derivation of a diffuse skylight vector is adopted from Chavez [31], which creates a relative scattering vector, as shown in Equation (2).
$$S_{\lambda_i} = \frac{\lambda_i^{-4}}{\sum_{i=1}^{n} \lambda_i^{-4}}\quad(2)$$
where $S_{\lambda_i}$ is the relative percentage of scatter for the $i$th wavelength $\lambda_i$ and $n$ is the total number of wavelengths.
Table 1 is an example of a relative Rayleigh scattering model applied to an artificial sensor with visible and near infrared wavelengths (λ) to create a diffuse skylight vector using Equation (2).
As shown in Table 1, longer wavelengths have exponentially less scattering, resulting in their decreased sensitivity to diffuse skylight. Therefore, selection of appropriate wavelengths for a diffuse skylight vector is crucial and should be restricted to visible wavelengths where sensitivity is measurable. The sensitivity effect can be observed when varying wavelength ranges are applied to the Rayleigh model. This is shown in Figure 3 using 50 nm wavelength increments across the visible-NIR spectrum.
The inclusion of longer wavelengths reduces the relative scatter of shorter wavelengths, with the converse also being true. A similar trait exists for the diffuse skylight signature, where inclusion of longer wavelengths produces a ‘flatter’ signature and lower overall scatter range, whereas exclusion of long wavelengths increases the signature curve and total scatter range. In applying the Rayleigh model to a sensor, the centres of the band wavelength ranges define the diffuse skylight vector.
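Equation (2) can be sketched directly. The band centres below are illustrative visible-spectrum values, not those of any sensor discussed here:

```python
def rayleigh_vector(band_centres_nm, exponent=-4.0):
    """Relative scatter per band (Equation (2)): S_i = lambda_i^x / sum(lambda_j^x).

    With the Rayleigh exponent of -4, shorter wavelengths receive
    exponentially more relative scatter than longer ones.
    """
    powers = [c ** exponent for c in band_centres_nm]
    total = sum(powers)
    return [p / total for p in powers]

# Illustrative blue/green/red band centres (nm); not a specific sensor
s = rayleigh_vector([480.0, 560.0, 660.0])
```

Adding a longer-wavelength band centre to the input list reduces the relative scatter of the shorter wavelengths, reproducing the flattening effect described above.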
Image pixels are represented as vectors in n-dimensional spectral space. In vector representation, two characteristics can uniquely describe any pixel, orientation (colour) and magnitude (intensity). Obtaining a single measure for magnitude is simple using Pythagoras’ theorem, but a single measure for orientation is exponentially more complex due to the multiple reference axes. To alleviate this complexity, colour can be represented by a normalised image pixel vector. Normalisation using an invariant colour model approach has been applied for two reasons, (1) to aid visual detection of diffuse skylight, and (2) to simplify the linear algebra formulation as per Equation (3)
$$\hat{p} = \frac{p}{\|p\|}\quad(3)$$
where $\hat{p}$ is the unit vector, $p$ the pixel vector and $\|p\|$ the pixel vector magnitude. The unit vector magnitude $\|\hat{p}\|$ is always 1.
When pixel vector magnitude is normalised to one, then pixel signatures are represented as shape only and can be compared directly. Furthermore, magnitude is a continuum, and its use for shadow detection relies upon user or scene specific ‘dark pixel’ thresholding. This method uses only vector orientation to delineate shadow. Importantly, a unit vector pixel is a linear scaled version of an original image pixel and the spectral angle does not change post-scaling.
The spectral angle between any two vectors is determined by the spectral angle Equation (4)
$$\theta(s,p) = \cos^{-1}\left(\frac{\sum_{i=1}^{n} s_i p_i}{\sqrt{\sum_{i=1}^{n} s_i^2} \cdot \sqrt{\sum_{i=1}^{n} p_i^2}}\right)\quad(4)$$
where $\theta$ is the spectral angle, $s_i$ the $i$th band value of the Rayleigh scatter vector, $p_i$ the $i$th band value of the pixel and $n$ the number of bands.
The spectral angle $\theta(s,p)$ itself is not needed to determine proportion, because the term inside the parentheses of Equation (4) is the linear algebra formula for vector projection, i.e., how much of $p$ is in $s$, or vice versa. The relationship is shown in Equation (5):
$$VP(s,p) = \cos\theta(s,p)\quad(5)$$
where $VP$ is the vector projection of $s$ in $p$.
From linear algebra, V P ( s , p ) has a value range from 0 to 1, where 0 is an orthogonal vector and 1 is a collinear vector. Equation (5) is used to determine the proportion of diffuse skylight vector in each pixel vector and a threshold value in the 0–1 range is required to define shadow extent.
Here we adopt a diffuse skylight threshold that is independent of illumination variations and scene characteristics. If the diffuse scattering effect were non-existent, then shadow could be truly defined as ‘black’, which in vector form means collinearity with a grey vector in which all spectral values are equal, with true black at zero magnitude. Because we use diffuse skylight to define shadow, the proportion of shadow is commensurate with an angular rotation of theta away from the grey vector towards the diffuse scatter vector. Figure 4 illustrates this using a simple 3-band image example of a Rayleigh scatter vector and a simple visualisation for n-dimensions.
The spectral angle between a Rayleigh diffuse skylight vector ( s ) and the grey vector ( g ) defines the radial distance between diffuse skylight and true black shadow respectively. This distance is defined as vector proportion ( V P ( s , g ) ) using Equation (5) and is used as the threshold to delineate shadow extent and abundance i.e., all pixels within θ are shadowed.
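The threshold $VP(s,g)$ can be sketched from Equation (5). The four-band diffuse skylight vector below is illustrative only, not one of the vectors derived for the study sensors:

```python
import math

def vector_projection(a, b):
    """Cosine of the angle between vectors a and b (Equation (5))."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative 4-band diffuse skylight (Rayleigh) vector, blue-dominated
s = [0.500, 0.270, 0.140, 0.090]
# Grey vector: all bands equal, the direction of true 'black' shadow
g = [1.0] * len(s)

# Shadow threshold: all pixels at least this similar to s are shadowed
threshold = vector_projection(s, g)
```

The value lies strictly between 0 (orthogonal) and 1 (collinear), consistent with the $VP$ range stated above.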
First, we simplify Equation (5) by substituting the unit vector image:
$$SI_p = \frac{\hat{p} \cdot s}{\|s\|}\quad(6)$$
where $SI_p$ (Scattering Index) is the proportion of skylight in pixel $p$, $\hat{p}$ is the unit vector form of the image pixel, $s$ the diffuse skylight vector and $\|s\|$ the magnitude of the diffuse skylight vector.
We finalise the method by confining Equation (6) to the threshold for shadow pixels:
$$SI_{(p,a)} = \frac{\hat{p} \cdot s}{\|s\|}, \quad \text{for } SI_{(p,a)} \geq VP(s,g)\quad(7)$$
where $SI_{(p,a)}$ is the proportion or abundance of shadow in each shadowed pixel.
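The complete SI classification of a single pixel, per Equations (3), (6) and (7), can be sketched as follows. All vectors and pixel values are illustrative assumptions, not data from the study scenes:

```python
import math

def unit(v):
    """Scale a vector to unit magnitude (Equation (3))."""
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

def scatter_index(pixel, skylight):
    """Proportion of the diffuse skylight vector in a pixel (Equation (6)):
    the projection of the unit pixel vector onto the unit skylight vector."""
    p_hat, s_hat = unit(pixel), unit(skylight)
    return sum(a * b for a, b in zip(p_hat, s_hat))

# Illustrative 4-band diffuse skylight (Rayleigh) vector and grey vector
skylight = [0.500, 0.270, 0.140, 0.090]
grey = [1.0, 1.0, 1.0, 1.0]

# Equation (7) cut-off: VP(s, g), the projection of skylight onto grey
threshold = scatter_index(grey, skylight)

shadow_pixel = [60, 35, 20, 14]    # dim and blue-skewed: shadow-like
sunlit_pixel = [90, 100, 95, 120]  # bright and spectrally flat: sunlit

is_shadow = scatter_index(shadow_pixel, skylight) >= threshold
```

Because only vector orientation is used, scaling a pixel by any positive constant leaves its SI value, and hence its classification, unchanged.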

Method Comparison

The SI shadow detection method is evaluated against the SMACC (Sequential Maximum Angle Convex Cone) detection algorithm [32] and Nagao’s histogram threshold on image brightness [24]. The methods are applied both to Worldview-3 imagery corrected to bottom-of-atmosphere reflectance [26] using FLAASH and to non-reflectance ADS40 airborne imagery in 8-bit DN values.
Both the SMACC and Nagao methods were assessed in a comprehensive review by Adeline, Chen, Briottet, Pang and Paparoditis [21], where Nagao’s method proved best overall with an average F-score of 92.5, and SMACC proved less effective with an average F-score of 83.9. The shadow detection approach of Richter and Müller [20] was the second-best performer (average F-score 90), but requires one band from the near infrared and two bands from the shortwave infrared, which excludes both the ADS40 and Worldview-3 sensors. SMACC was chosen as the second comparison method because it is a physics-based approach, it is available as an automated tool in ENVI software, and shadow is referenced as a ‘black’ vector as per the physics-based approach of Richter and Müller [20].
SMACC analysis was run using 30 endmembers and a ‘positive only’ constraint that produces a fractional abundance of shadow per pixel. For Worldview-3 imagery, this is directly comparable with the SI method in terms of shadow abundance. However, the ADS40 imagery was not corrected to reflectance, resulting in the histogram of shadow abundance being left-skewed and thus nullifying the use of the SI threshold for SMACC. Instead, for the SMACC method, we applied a supervised threshold based on Otsu [33] to the ADS40 abundance image to create a binary mask and offset the histogram skew. Therefore, for the purposes of this paper, we could only compare abundances for the Worldview-3 image subsets.
To generate the Nagao shadow mask, original images were smoothed using a 5 × 5 pixel window as per the edge-preserving smoothing algorithm of Nagao and Matsuyama [34]. A brightness (intensity) grey-level image was derived from the smoothed image, and a grey-level threshold was then applied in ENVI software using the Otsu [33] histogram thresholding parameter to finalise the mask. The Worldview-3 image has eight bands, so the blue, green, red and NIR1 bands were selected to replicate Nagao’s method.
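A minimal version of the Otsu [33] grey-level threshold used in the masking step can be sketched as follows; this is an illustration of the technique only, not the ENVI implementation:

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: pick the grey level (0..bins-1) that maximises
    between-class variance of the two resulting pixel groups."""
    hist = [0] * bins
    for v in values:
        hist[int(v)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, 0.0
    w_b, sum_b = 0, 0.0   # background weight and intensity sum
    for t in range(bins):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated grey-level clusters: threshold falls between them
t = otsu_threshold([10] * 50 + [200] * 50)
```

Pixels at or below the returned level form one class (e.g. shadow in a brightness image), with the split chosen entirely from the histogram, requiring no user-specified value.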
To compare performance on different sensors, the methods were applied to high-resolution imagery from the airborne ADS40 sensor (DN values) and to spaceborne Worldview-3 imagery corrected to reflectance using FLAASH after MODTRAN [12]. The diffuse skylight vector ($\lambda^{-4}$) was created from the centres of all spectral bands in the visible spectrum for both images. The angles between the diffuse skylight and grey vectors were used as shadow thresholds and are shown as both degrees and vector proportions in Table 2.

2.2. Materials

All analyses were performed on a common extent for images from the airborne Leica ADS40 and spaceborne Worldview-3 satellite sensors. The image scene covers an area of 1513 × 1196 m on the southern limits of the Woolgoolga township on the mid north coast of New South Wales, Australia. The scene centre is located at coordinates 153°11′55″E and 30°7′27″S. The scene consists of urban, industrial, cleared grassland, sewage treatment plant, horticulture, ocean, beach, sealed/unsealed roads, littoral rainforest, heath and wet/dry sclerophyll eucalypt forest. Three subsets of equal size (260 × 170 m) were selected for evaluation, as they vary in composition and contain targets known for misclassification of shadow. Both Worldview-3 and ADS40 images and the selected subsets ‘Urban Forest’, ‘Dark Water-Forest’ and ‘Industrial’ are shown in Figure 5.
The Worldview-3 image was acquired under clear-sky conditions (visibility > 100 km) on 4 April 2016 at 10:07 a.m. (Australian Eastern Standard Time) with sun azimuth and zenith angles of 28°24′06″ and 36°03′38″, respectively. The image was acquired in panchromatic and eight multispectral bands, but only the multispectral bands were used for shadow detection. The spatial resolution was a 1.2 m ground sample distance, with sensor bands and their centres shown in Table 3.
Worldview-3 imagery was supplied as DN values that were converted to top-of-atmosphere radiance by applying the supplied gains and offsets to each pixel digital number. Top-of-atmosphere radiance was then converted to reflectance using ENVI’s FLAASH atmospheric correction module after MODTRAN 5 [35], using a scene average elevation of 113 m above sea level calculated from LiDAR data.
The ADS40 imagery was acquired on the 11th of September 2009 at 0945 h (Australian Eastern Standard Time) with sun azimuth and zenith angles as 41°46′24″ and 46°32′28″ respectively. The imagery is part of the New South Wales state government’s Land & Property Information (LPI) Standard Coverage Imagery Capture Program [36]. The colour/near infra-red four-band imagery (428–492 nm, 533–587 nm, 608–662 nm, 833–887 nm) was captured using a Leica ADS40 sensor with resultant 50 cm ground sampling distance. Processing included ortho-rectification, colour-matching and the joining of overlapping image strips. No atmospheric correction procedures were applied, and image values are 8-bit DN.

2.3. Performance Comparison

Most shadow detection performance measures have been applied using an independent pre-analysis shadow extent boundary [17]. Here we have used a post-analysis approach that targets shadow detection anomalies. For shadow extent, Overall Accuracy and F-Scores are used to quantify performance, after Adeline, Chen, Briottet, Pang and Paparoditis [21]. Table 4 shows the formulation of Overall Accuracy and F-Scores [37].
In this post-analysis approach, reference points are created within a shadow extent created by combining shadow masks from all three detection methods. The method is described stepwise, as follows:
  • Convert binary shadow masks (1 = Shadow, 0 = No shadow) from all three methods to vector polygons maintaining pixel edge boundaries, i.e., unsmoothed conversion.
  • Spatially combine all three binary shadow masks to create eight unique classes, i.e., all agree non-shadow, all agree shadow, only SI shadow, only SMACC shadow, only Nagao shadow, SI & SMACC shadow, Nagao & SMACC shadow and SI & Nagao shadow.
  • Remove ‘all agree no shadow’ class, as no statistical power is gained from its inclusion.
  • Create a dataset of point references using a stratified random sampling strategy using the remaining seven classes as strata.
  • The number of points per class is determined by the ratio of total class area to the number of polygons in the class. The ratio is then doubled to approximate a binomial distribution sample size of N = 76 based on an expected map accuracy of 95% and allowable error of 5% [26].
  • Using original images only, perform an independent visual assessment of all reference points and assign values 1 for shadow and −1 for non-shadow. Importantly, shadow value 1 captures self-cast and cast shadow areas.
  • Overlay reference points with each method’s shadow mask and determine F-Score and Overall Accuracies.
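The Overall Accuracy and F-score formulation of Table 4 can be sketched as follows; the confusion counts are illustrative only, not results from the study:

```python
def shadow_scores(tp, fp, fn, tn):
    """Overall Accuracy and F-score from reference-point confusion counts
    (true/false positives and negatives against the shadow masks).

    Returns fractions in 0-1; multiply by 100 for the percentage scale
    reported in the text.
    """
    overall_accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return overall_accuracy, f_score

# Illustrative counts for one method on one subset
oa, f = shadow_scores(tp=60, fp=8, fn=6, tn=26)
```

Applying this to each method's overlay of reference points and shadow mask yields the per-subset entries that the tables then average.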
Table 5 shows the overall counts and shadow versus non-shadow assignments for reference points from the three subsets in both images.
For shadow abundance, it is logistically difficult to obtain physics-based reference data that is both synchronised to image acquisition and sun/sensor geometry. Evaluation of shadow abundance can only be performed on Worldview subsets and is done by visual comparison between the SI and SMACC methods. Comparison is done using histograms created from shadow pixels regarded as shadow by both SI and SMACC methods.

3. Results

Shadow Extent

Table 6 provides the results for Overall Accuracies and includes the mean for each method for both ADS40 and Worldview-3 images. The mean of Overall Accuracies for each image subset is included and shows that shadow detection for Urban Forest and Industrial subsets is similar (70.9% and 76.7% respectively) and the Dark Water Forest subset is significantly lower (50.5%).
Table 7 provides the F-score results in the same format as Table 6. Table 8 gives the final Overall Accuracy rankings and Table 9 the final F-score rankings. These rankings are the means of the Method Means from both images. For example, in Table 8 the SI score of 79.2 is the mean of the SI Method Mean scores (77.1 and 81.3) from Table 6.
Figure 6 displays the results for the ADS40 image subsets using the three-class display shown in the figure key. The three classes are: (1) all three methods agree there is shadow, (2) only Nagao or SMACC detects shadow, and (3) only SI detects shadow. Figure 7 displays the results for the Worldview-3 image subsets with the same key as Figure 6.
The fraction of shadow abundance in a pixel is produced only by the SI and SMACC methods, so their abundance histograms are used for comparison. Figure 8 compares SI and SMACC shadow abundance for the ADS40 image, where the shaded area indicates the SI shadow threshold (≥0.89) for ADS40 imagery from Table 2.
Figure 9 follows the same format as Figure 8, using Worldview-3 imagery.

4. Discussion

The Overall Accuracy results in Table 8 show a clear separation of performance, with measures of 79.2%, 63.3% and 55.6% for SI, Nagao and SMACC, respectively. SI is the best performer, with a 15.9% improvement over Nagao, and SMACC sits a further 7.7% below Nagao. The order of performance and degree of separation are similar for the F-scores in Table 9, with results of 82.3, 72.1 and 65.1 for SI, Nagao and SMACC, respectively: separations of 10.2 between SI and Nagao and 7.0 between Nagao and SMACC. Across all methods, the overall performance spans an F-score range of 82.3–65.1, compared to the review by Adeline, Chen, Briottet, Pang and Paparoditis [21], which reported an F-score range of 92.5–83.9. In the review by Sanin, Sanderson and Lovell [17], a mean of two detections, foreground object and shadow, resulted in a score range of 90–65 using the Recall measure shown in Table 4. In both reviews, the cast shadow references used to measure performance were generated prior to analysis. The performance measure in this paper, the stratified random sampling strategy, is applied post-analysis and provides three important properties: (1) strata are replicated, (2) replication increases the probability of capturing both cast and self-cast shadow, and (3) references are located on features that are problematic for shadow detection. The lower performance range of 82.3–65.1 reflects the increased rigour of the performance measure used. Therefore, with the design of the performance measures considered, SI is the best performing shadow detection method, followed by Nagao and SMACC.
The mean Overall Accuracies and F-scores for all subsets in both images are shown in Table 6 and Table 7. Again, SI performed best in nearly all subsets in both images, followed by Nagao and SMACC respectively. Of the six subsets measured, the SI method outperformed all others with one exception: Nagao outperformed SI in the ‘Urban Forest’ subset of the ADS40 imagery, with small Overall Accuracy and F-score separations of 3.5 and 3.7 respectively. Since the only variable is the ADS40 imagery, it cannot be deduced that Nagao’s method performs better across ‘Urban Forest’ subsets in general.
For the SI and SMACC methods only, the abundance of shadow in each pixel is shown graphically in Figure 8 and Figure 9 by comparing the histograms of abundance outputs for the ADS40 and Worldview-3 images, respectively. For an illuminated image with a relatively high sun-sensor angle and minimal terrain effects, most pixels will be shadow free, resulting in a minority of shadow pixels. Under the same image conditions, an ideal histogram of shadow abundance would have a minimum shadow threshold located at the far right of the abundance axis (x axis) with low pixel counts above that threshold. This pattern is observed in Figure 8 and Figure 9, highlighted by the SI thresholds shown as shaded areas. The ADS40 histograms in Figure 8 show the SMACC histogram (red line) has a left-skewed distribution. All other abundances in both figures reflect a right-skewed distribution. The left-skewed ADS40 histogram highlights the scene-sensor dependency of the SMACC algorithm [6,20]. The SMACC algorithm treats shadow as a ‘null’ or ‘black’ vector and its abundance in a pixel relies upon the resulting linear combination of all endmembers in that pixel [32]. With SMACC, endmembers are selected from n-dimensional data space, and the four-band ADS40 imagery has low dimensionality, resulting in poor shadow abundance estimates. The SI method is invariant to scene-sensor characteristics and provides a physics-based estimate for shadow abundance.
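The skewness comparison can be made concrete with a short numerical sketch. The abundance values below are hypothetical and are not drawn from the ADS40 or Worldview-3 histograms; the sketch only illustrates the sign convention used above (positive skewness for a right-skewed distribution with most pixels shadow free).

```python
import numpy as np

def sample_skewness(x):
    # Fisher-Pearson coefficient of skewness: > 0 indicates a right-skewed
    # (long right tail) distribution, < 0 a left-skewed one
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return (dev ** 3).mean() / x.std() ** 3

# Hypothetical shadow-abundance fractions for a mostly shadow-free scene:
# most pixels have low abundance, a few are fully shadowed
abundance = np.array([0.05, 0.08, 0.10, 0.10, 0.12, 0.15, 0.20, 0.90, 0.95])
right_skew = sample_skewness(abundance)        # positive: the 'ideal' shape
left_skew = sample_skewness(1.0 - abundance)   # negative: mass piled to the right
```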
The image subsets are selected to target shadow detection difficulties, with comparative results of their subset means provided in Table 6 and Table 7. The ‘Industrial’ subset (F-score 85.2) is dominated by homogeneous human structures that are impervious and produce cast shadows of high contrast. The ‘Urban Forest’ subset (F-score 81.0) combines this with natural sclerophyll forest containing cast and self-cast shadows, and the ‘Dark Water Forest’ subset (F-score 53.4) is mostly water and sclerophyll forest. From both Overall Accuracy and F-scores, shadow detection accuracy improves on scenes dominated by human structures and declines with the increased heterogeneity of natural landscapes.
The SI method detects finer shadow detail and discriminates between shadowed and naturally dark objects, as seen by the sky-blue symbol colour in the higher resolution ADS40 imagery (50 cm) shown in Figure 6. Examples of finer shadow detection are backyard fence shadows (Figure 6b), car shadows (Figure 6b,c) and factory building shadows (Figure 6f) that are missed by Nagao and SMACC. SI excludes some sclerophyll forest shadowing in the ‘Urban Forest’ subset (Figure 6b), explaining the better F-score and accuracy for Nagao on that subset. However, SI detects both self-cast tree shadows in forest and house shadows (Figure 6d) where Nagao and SMACC do not. Both Nagao and SMACC incorrectly detect the dark water body and sewage holding ponds in Figure 6d: Nagao selects the dark water body because of its low intensity, and SMACC assumes a ‘black’ vector because of similarly low values in all bands. SI uses diffuse skylight colour and therefore ignores the naturally dark body, which contains a low proportion (<0.89) of diffuse skylight. SI is the only method that detects tree shadow on the dark water body’s eastern perimeter, shown by the ‘All Agree’ dark blue symbology.
The performance rankings of the methods for Worldview-3 imagery do not change, but their errors of omission and commission differ from those of the ADS40 image subsets. The differences are that the Worldview-3 sensor is spaceborne, its imagery is corrected to reflectance and appears darker, its resolution is coarser at 1.2 m, and image acquisition occurred five years after the ADS40 capture. Temporal changes can be seen in Figure 7c, where there is an increase in aquatic vegetation cover on the dark water body and one sewage holding pond is mostly evaporated. In the ‘Urban Forest’ subset (Figure 7b), SI detects both forest and tree crown shadows with low agreement from Nagao and SMACC. All three methods incorrectly detect the grey bitumen road and residential houses. Nagao and SMACC detect all of the bitumen road and all houses with darker roofing, reinforcing the limitations of using intensity and the ‘black’ vector. SI errors are minor in comparison, with detection of one blue house roof (middle left, Figure 7b) and a small portion of the bitumen road, shown as ‘All Agree’ in the top right corner (Figure 7b). A similar scenario occurs in the ‘Industrial’ subset (Figure 7f), where the grey bitumen road is detected by Nagao and SMACC. SI does not detect bitumen but does detect pitches of factory roofs that have reduced illumination, because those pixels present as diffuse skylight. In the ‘Dark Water Forest’ subset (Figure 7d), Nagao and SMACC detect all the darker portions of the water body’s aquatic vegetation, where SI detects only the darkest portions. The sewage holding pond at the top of the subset is very dark and all methods detect it as shadow. For SI, this is not an error, because the sensor detects a small diffuse skylight response over a very dark body due to its very low radiance. This is shown in Figure 10 using a spectral signature for the holding pond derived from the mean of all pixels across the visible bands.
The SI method detects bodies that contain or present with diffuse skylight effects, but this is not an error of commission. Shadow detection is required for shadow removal, so when using SI for detection there is a basis for the diffuse skylight proportion of each pixel to be quantified, unmixed and removed. The proportion of diffuse skylight is the weighting coefficient in the unmixing process and, because of the Ångström-law exponent in the scattering model, that process is non-linear [6]. Therefore, for a dark water body or blue house roof, removal of the diffuse skylight is weighted and the resulting effect on signatures will be negligible.
Application of the SI method is simple and computationally efficient, and it can be implemented via scripting or standard remote sensing software. The Nagao method required IDL coding to run the edge-preserving smoothing algorithm, which proved computationally expensive, calculating the mean and variance of nine separate sub-windows per pixel [34]. SMACC is easily run from ENVI and only requires knowledge of the convex cone method of determining endmembers so that the end user can select the appropriate unmixing constraint [32]. The SI method is efficient to implement, needing only one IDL script that calculates the diffuse skylight vector (Equation (1)), a unit vector transformation (Equation (2)), a vector projection (Equation (4)) and the shadow threshold (Equation (6)).
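A minimal sketch of those four steps is given below. Equations (1), (2), (4) and (6) are not reproduced in this section, so the sketch assumes the projection step reduces to the cosine of the angle between the pixel vector and the diffuse skylight vector, and it assumes ADS40 visible band centres near 460, 560 and 635 nm, which approximately reproduce the Table 2 vector [0.577, 0.263, 0.159]. It is an illustration under those assumptions, not the authors' IDL implementation.

```python
import numpy as np

def skylight_vector(band_centres_nm):
    # Relative lambda^-4 Rayleigh scatter for the given band centres,
    # normalised so the components sum to 1 (sketch of Equation (1))
    s = (np.asarray(band_centres_nm, dtype=float) / band_centres_nm[0]) ** -4.0
    return s / s.sum()

def scattering_index(pixels, skylight):
    # Projection of each pixel's unit vector onto the unit skylight vector:
    # only the pixel's orientation ('colour') matters, not its intensity
    d = skylight / np.linalg.norm(skylight)
    norms = np.linalg.norm(pixels, axis=-1, keepdims=True)
    return (pixels / np.where(norms == 0.0, 1.0, norms)) @ d

sky = skylight_vector([460.0, 560.0, 635.0])   # assumed ADS40 RGB band centres
pixels = np.array([[10.0, 10.0, 10.0],          # grey pixel: below threshold
                   [5.77, 2.63, 1.59]])         # skylight-coloured pixel: shadow
si = scattering_index(pixels, sky)
shadow_mask = si >= 0.89                        # ADS40 threshold from Table 2
```

The grey pixel projects at roughly cos(28.1°) ≈ 0.88 onto the skylight vector and falls below the 0.89 threshold, while the skylight-coloured pixel projects at nearly 1.0 and is flagged as shadow, regardless of either pixel's brightness.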
In summary, the results show that the SI method clearly outperforms the shadow detection methods of Nagao and SMACC. Using diffuse skylight as a direct measure of shadow extent is physics-based and simply quantified with basic linear algebra, using only the pixel vector’s orientation, or ‘colour’, to measure shadow. SI avoids the thresholding limitations associated with pixel brightness/intensity and is invariant to scene characteristics and sun-object-sensor geometries. The SI method can therefore be applied to imagery of higher resolution despite the resulting exponential increase in scene complexity. Since SI is quantifiable, it provides a physics base upon which to develop a shadow removal approach using non-linear unmixing.

5. Conclusions

A simple shadow detection and abundance method that uses a diffuse skylight vector has been developed. The SI method shows improved accuracy on high-resolution imagery when evaluated against the Nagao and SMACC detection methods. Overall Accuracy and F-score results show misclassifications are still present in areas known for detection anomalies, i.e., objects in shadow confused with illuminated dark objects. Furthermore, shadow detection in scenes dominated by human-built structures proved more accurate with the SI method than in less homogeneous natural landscapes. Qualitative results suggest that diffuse skylight is an effective and repeatable method for measuring shadow abundance, but this hypothesis requires further quantitative assessment.

Author Contributions

Conceptualisation: M.C. and L.K.; Methodology, Coding, Analysis and Writing: M.C.; Review and Editing: M.C. and L.K.; L.K. supervised the research.

Funding

This research received no external funding.

Acknowledgments

Thanks to Greg Windsor, Doug Herrick and Kate Wilkinson of the New South Wales Land & Property Information for supplying the ADS40 imagery and associated technical specifications as part of the Standard Coverage Imagery Capture Program. Also, thanks to Sylvia Michael and Aaron Aeberli of Geoimage (Brisbane, Australia) for their assistance with Worldview-3 image calibration.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Funka-Lea, G.; Bajcsy, R. Combining color and geometry for the active, visual recognition of shadows. In Proceedings of the IEEE Fifth International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 203–209. [Google Scholar] [Green Version]
  2. Shahtahmassebi, A.; Yang, N.; Wang, K.; Moore, N.; Shen, Z. Review of shadow detection and de-shadowing methods in remote sensing. Chin. Geogr. Sci. 2013, 23, 403–420. [Google Scholar] [CrossRef]
  3. Arévalo, V.; González, J.; Ambrosio, G. Shadow detection in colour high-resolution satellite images. Int. J. Remote Sens. 2008, 29, 1945–1963. [Google Scholar] [CrossRef]
  4. Dare, P.M. Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm. Eng. Remote Sens. 2005, 71, 169–177. [Google Scholar] [CrossRef]
  5. Drew, M.S.; Finlayson, G.D.; Hordley, S.D. Recovery of chromaticity image free from shadows via illumination invariance. In Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision (ICCV’03), Nice, France, 12 October 2003; pp. 32–39. [Google Scholar]
  6. Adler-Golden, S.M.; Matthew, M.W.; Anderson, G.P.; Felde, G.W.; Gardner, J.A. Algorithm for de-shadowing spectral imagery. In Proceedings of the International Symposium on Optical Science and Technology, Seattle, WA, USA, 8 November 2002; International Society for Optics and Photonics: Bellingham, WA, USA, 2002; pp. 203–210. [Google Scholar]
  7. Dymond, J.R.; Shepherd, J.D. Correction of the topographic effect in remote sensing. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2618–2619. [Google Scholar] [CrossRef]
  8. Flood, N.; Danaher, T.; Gill, T.; Gillingham, S. An operational scheme for deriving standardised surface reflectance from Landsat TM/ETM+ and SPOT HRG imagery for eastern Australia. Remote Sens. 2013, 5, 83–109. [Google Scholar] [CrossRef]
  9. Li, F.; Jupp, D.; Thankappan, M. Using high resolution DSM data to correct the terrain illumination effect in Landsat data. In Proceedings of the 19th International Congress on Modelling and Simulation, Perth, Australia, 12–16 December 2011; pp. 12–16. [Google Scholar]
  10. Wen, J.; Liu, Q.; Xiao, Q.; Liu, Q.; You, D.; Hao, D.; Wu, S.; Lin, X. Characterizing Land Surface Anisotropic Reflectance over Rugged Terrain: A Review of Concepts and Recent Developments. Remote Sens. 2018, 10, 370. [Google Scholar] [CrossRef]
  11. Vermote, E.F.; Tanré, D.; Deuze, J.L.; Herman, M.; Morcrette, J.-J. Second simulation of the satellite signal in the solar spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675–686. [Google Scholar] [CrossRef]
  12. Berk, A.; Adler-Golden, S.; Ratkowski, A.; Felde, G.; Anderson, G.; Hoke, M.; Cooley, T.; Chetwynd, J.; Gardner, J.; Matthew, M. Exploiting MODTRAN radiation transport for atmospheric correction: The FLAASH algorithm. In Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002; IEEE: New York, NY, USA, 2002; pp. 798–803. [Google Scholar]
  13. Richter, R. Correction of satellite imagery over mountainous terrain. Appl. Opt. 1998, 37, 4004–4015. [Google Scholar] [CrossRef] [PubMed]
  14. Beisl, U. Reflectance calibration scheme for airborne frame camera images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 1–7. [Google Scholar] [CrossRef]
  15. Li, F.; Jupp, D.L.; Thankappan, M. Issues in the application of Digital Surface Model data to correct the terrain illumination effects in Landsat images. Int. J. Dig. Earth 2015, 8, 235–257. [Google Scholar] [CrossRef]
  16. Blauensteiner, P.; Wildenauer, H.; Hanbury, A.; Kampel, M. On colour spaces for change detection and shadow suppression. In Proceedings of the Computer Vision Winter Workshop, Telč, Czech Republic, 6–8 February 2006; pp. 117–123. [Google Scholar]
  17. Sanin, A.; Sanderson, C.; Lovell, B.C. Shadow detection: A survey and comparative evaluation of recent methods. Pattern Recognit. 2012, 45, 1684–1695. [Google Scholar] [CrossRef] [Green Version]
  18. Ullah, H.; Ullah, M.; Uzair, M.; Rehman, F. Comparative study: The evaluation of shadow detection methods. Int. J. Video Image Process. Netw. Secur. 2010, 10, 1–7. [Google Scholar]
  19. Makarau, A.; Richter, R.; Muller, R.; Reinartz, P. Adaptive shadow detection using a blackbody radiator model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2049–2059. [Google Scholar] [CrossRef]
  20. Richter, R.; Müller, A. De-shadowing of satellite/airborne imagery. Int. J. Remote Sens. 2005, 26, 3137–3148. [Google Scholar] [CrossRef]
  21. Adeline, K.; Chen, M.; Briottet, X.; Pang, S.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38. [Google Scholar] [CrossRef]
  22. Rüfenacht, D.; Fredembach, C.; Süsstrunk, S. Automatic and accurate shadow detection using near-infrared information. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1672–1678. [Google Scholar] [CrossRef] [PubMed]
  23. Fredembach, C.; Susstrunk, S. Illuminant Estimation and Detection Using Near-Infrared; IS&T/SPIE Electronic Imaging: San Jose, CA, USA; International Society for Optics and Photonics: Bellingham, WA, USA, 2009; p. 11. [Google Scholar]
  24. Nagao, M.; Matsuyama, T.; Ikeda, Y. Region extraction and shape analysis in aerial photographs. Comput. Gr. Image Process. 1979, 10, 195–223. [Google Scholar] [CrossRef]
  25. Tsai, V.J. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1661–1671. [Google Scholar] [CrossRef]
  26. Nicodemus, F.E. Geometrical Considerations and Nomenclature for Reflectance; US Department of Commerce, National Bureau of Standards: Washington, DC, USA, 1977; Volume 160. [Google Scholar]
  27. Schläpfer, D.; Hueni, A.; Richter, R. Cast Shadow Detection to Quantify the Aerosol Optical Thickness for Atmospheric Correction of High Spatial Resolution Optical Imagery. Remote Sens. 2018, 10, 200. [Google Scholar] [CrossRef]
  28. Jensen, J.R.; Lulla, K. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice Hall: Upper Saddle River, NJ, USA, 1987. [Google Scholar]
  29. Slater, P.N.; Doyle, F.; Fritz, N.; Welch, R. Photographic systems for remote sensing. Manual of Remote Sensing 1983, 1, 231–291. [Google Scholar]
  30. Chavez, P.S. Image-based atmospheric corrections-revisited and improved. Photogramm. Eng. Remote Sens. 1996, 62, 1025–1035. [Google Scholar]
  31. Chavez, P.S. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sens. Environ. 1988, 24, 459–479. [Google Scholar] [CrossRef]
  32. Gruninger, J.H.; Ratkowski, A.J.; Hoke, M.L. The Sequential Maximum Angle Convex Cone (SMACC) Endmember Model; Defense and Security, International Society for Optics and Photonics: Bellingham, WA, USA, 2004; pp. 1–14. [Google Scholar]
  33. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  34. Nagao, M.; Matsuyama, T. Edge preserving smoothing. Comput. Gr. Image Process. 1979, 9, 394–407. [Google Scholar] [CrossRef]
  35. Berk, A.; Anderson, G.; Acharya, P. MODTRAN 5.2.0.0 User’s Manual; Air Force Research Laboratory, Space Vehicles Directorate, Air Force Materiel Command: Bedford, MA, USA, 2008. [Google Scholar]
  36. NSW Land & Property Information. ADS40 Standard Forward Program. Available online: http://spatialservices.finance.nsw.gov.au/mapping_and_imagery/aerial_imagery (accessed on 13 December 2014).
  37. Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond accuracy, F-score and ROC: A family of discriminant measures for performance evaluation. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1015–1021. [Google Scholar]
Figure 1. Shadow definitions adopted from Arévalo, et al. [3].
Figure 2. Penumbra width extent from sun elevation, building height and angular width of the Sun. Adopted from Dare [4].
Figure 3. Rayleigh scatter vector signatures across the visible and near infrared spectrum wavelengths. Signature variation is caused by increasing or decreasing the number of wavelengths used to derive the Rayleigh scatter vector.
Figure 4. 3D example showing the spectral angle rotation θ from grey vector to a 3-band Rayleigh scatter vector. (a) Is a 3D visualisation of the relationship between grey vector and diffuse scatter vector and (b) an equivalent for n-dimensions without the n axis shown.
Figure 5. Study area showing images and target subset areas. (a) ADS40 image, (b) Worldview-3 image. Urban Forest, Dark Water-Forest and Industrial subsets are used for shadow target detection and are shown in red.
Figure 6. Shadow detection results for ADS40 image subsets. Binary shadow masks from SI, Nagao and SMACC methods are combined into three display classes. Subsets (a,c,e) are original images and subsets (b,d,f) are respective shadow detection results.
Figure 7. Shadow detection results for Worldview-3 image subsets. Binary shadow masks from SI, Nagao and SMACC methods are combined into three display classes. Subsets (a,c,e) are original images and subsets (b,d,f) are respective shadow detection results.
Figure 8. Shadow abundance comparison between SI and SMACC shadow detection methods applied to ADS40 imagery. Shaded area indicates SI threshold range (≥0.89) for shadow.
Figure 9. Shadow abundance comparison between SI and SMACC shadow detection methods applied to Worldview-3 imagery. Shaded area indicates SI threshold range (≥0.85) for shadow.
Figure 10. Mean spectral signature of a dark holding pond from the visible wavelength bands showing low reflectance values and higher blue wavelength response.
Table 1. Relative Rayleigh scattering model (λ−4) applied to visible and NIR wavelengths. Percentage of total scatter is used to create a diffuse skylight vector. A worked example is the Green wavelength where 1.22−4 = 0.448 resulting in 0.448/1.756 = 0.26 (or 26%).
Wavelength (λ nm / Relative to Blue) | Relative Rayleigh Function (λ⁻⁴) | Rayleigh Scattering Model (% of Total Scatter)
Blue (450 / 1) | 1 | 57
Green (550 / 1.22) | 0.448 | 26
Red (650 / 1.44) | 0.229 | 13
Near Infrared (850 / 1.88) | 0.078 | 4
Total | 1.756 |
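The worked example in the Table 1 caption generalises to any set of band centres; a short sketch of the λ⁻⁴ calculation:

```python
import numpy as np

# Rayleigh lambda^-4 scatter relative to blue (450 nm), as in Table 1
wavelengths = np.array([450.0, 550.0, 650.0, 850.0])  # blue, green, red, NIR
relative = (wavelengths / wavelengths[0]) ** -4.0      # relative Rayleigh function
percent = 100.0 * relative / relative.sum()            # % of total scatter
```

For the green band, (550/450)⁻⁴ ≈ 0.448 and 0.448/1.756 ≈ 26%, matching the caption's worked example.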
Table 2. Diffuse skylight vectors and SI shadow thresholds for ADS40 and Worldview images.
Image | Visible Bands Used | Diffuse Skylight Vector (λ⁻⁴) | SI Threshold: Degrees (°) | SI Threshold: Proportion V_P(s,g)
ADS40 | 1, 2, 3 | [0.577, 0.263, 0.159] | 28.10 | 0.89
Worldview-3 | 1, 2, 3, 4, 5 | [0.418, 0.261, 0.148, 0.099, 0.071] | 32.43 | 0.85
Table 3. Worldview-3 sensor bands showing bandwidths and their wavelength centres in nanometres.
Worldview-3 Band | λ (nm) | λ Centre (nm)
1 Coastal | 400–452 | 426
2 Blue | 448–510 | 479
3 Green | 518–586 | 552
4 Yellow | 590–630 | 610
5 Red | 632–692 | 662
6 RedEdge | 706–746 | 726
7 NIR1 | 770–895 | 832.5
8 NIR2 | 860–1040 | 950
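Each λ centre above is simply the midpoint of the band's wavelength range, e.g. (400 + 452)/2 = 426 nm:

```python
# Worldview-3 band centres as the midpoint of each bandwidth (Table 3)
bands_nm = {
    "1 Coastal": (400, 452), "2 Blue": (448, 510), "3 Green": (518, 586),
    "4 Yellow": (590, 630), "5 Red": (632, 692), "6 RedEdge": (706, 746),
    "7 NIR1": (770, 895), "8 NIR2": (860, 1040),
}
centres = {band: (lo + hi) / 2.0 for band, (lo, hi) in bands_nm.items()}
```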
Table 4. Formulation for overall accuracy and F-score.
 | Predicted Positive | Predicted Negative
Reference Positive | TP | FN
Reference Negative | FP | TN

Precision (P) = TP / (TP + FP)
Recall (R) = TP / (TP + FN)
Overall Accuracy = (TP + TN) / (TP + FP + TN + FN)
F-score = (2 × R × P) / (R + P)
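The four measures in Table 4 follow directly from the confusion-matrix counts; the TP/FP/TN/FN values in the example below are hypothetical, not taken from the study:

```python
def shadow_metrics(tp, fp, tn, fn):
    # Confusion-matrix measures from Table 4
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    overall_accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_score = 2.0 * recall * precision / (recall + precision)
    return overall_accuracy, f_score

# Hypothetical counts: 8 shadow points correctly detected, 2 false alarms,
# 85 correct non-shadow points, 5 shadow points missed
oa, f = shadow_metrics(tp=8, fp=2, tn=85, fn=5)
```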
Table 5. Counts of shadow/non-shadow reference points used in image subsets for Worldview-3 and ADS40.
Subset | Worldview-3 Shadow (1) | Worldview-3 Non-Shadow (−1) | Worldview-3 Total | ADS40 Shadow (1) | ADS40 Non-Shadow (−1) | ADS40 Total
Urban Forest | 74 | 35 | 109 | 73 | 8 | 81
Dark Water-Forest | 35 | 65 | 100 | 52 | 106 | 158
Industrial | 75 | 25 | 100 | 128 | 10 | 138
Table 6. Overall accuracy results for all three methods on both ADS40 and Worldview-3 images that includes method mean and image subset mean.
Image Subset | Worldview-3 SI | Worldview-3 Nagao | Worldview-3 SMACC | ADS40 SI | ADS40 Nagao | ADS40 SMACC | Image Subset Mean
Industrial | 79.0 | 79.0 | 69.0 | 87.0 | 79.0 | 67.4 | 76.7
Urban Forest | 82.4 | 67.0 | 62.4 | 70.4 | 74.1 | 69.1 | 70.9
Dark Water-Forest | 70.0 | 46.0 | 41.0 | 86.7 | 34.8 | 24.7 | 50.5
Method Mean | 77.1 | 64.0 | 57.5 | 81.3 | 62.6 | 53.7 |
Table 7. F-score results for all three methods on both ADS40 and Worldview-3 images that includes method mean and image subset mean.
Image Subset | Worldview-3 SI | Worldview-3 Nagao | Worldview-3 SMACC | ADS40 SI | ADS40 Nagao | ADS40 SMACC | Image Subset Mean
Industrial | 85.7 | 85.3 | 80.0 | 92.6 | 87.6 | 79.8 | 85.2
Urban Forest | 88.7 | 77.5 | 74.2 | 81.0 | 84.4 | 80.0 | 81.0
Dark Water-Forest | 68.8 | 50.9 | 43.8 | 76.9 | 47.2 | 32.8 | 53.4
Method Mean | 81.1 | 71.2 | 66.0 | 83.5 | 73.1 | 64.2 |
Table 8. Final rankings for Overall Accuracy calculated from Method Mean in Table 6.
Method | Final Ranking (Overall Accuracy)
SI | 79.2
Nagao | 63.3
SMACC | 55.6
Table 9. Final rankings for F-score calculated from Method Mean in Table 7.
Method | Final Ranking (F-score)
SI | 82.3
Nagao | 72.1
SMACC | 65.1

Cameron, M.; Kumar, L. Diffuse Skylight as a Surrogate for Shadow Detection in High-Resolution Imagery Acquired Under Clear Sky Conditions. Remote Sens. 2018, 10, 1185. https://0-doi-org.brum.beds.ac.uk/10.3390/rs10081185

