Article

Depth from Satellite Images: Depth Retrieval Using a Stereo and Radiative Transfer-Based Hybrid Method

1 Commonwealth Scientific and Industrial Research Organisation (CSIRO), Data 61, 65 Brockway Road, Floreat, WA 6014, Australia
2 CSIRO Oceans and Atmosphere, GPO Box 1700, Canberra, ACT 2601, Australia
* Author to whom correspondence should be addressed.
Submission received: 25 May 2018 / Revised: 27 July 2018 / Accepted: 2 August 2018 / Published: 8 August 2018

Abstract

Satellite imagery is increasingly being used to provide estimates of bathymetry in near-coastal (shallow) areas of the planet, as a more cost-effective alternative to traditional methods. In this paper, the relative accuracy of radiative-transfer and photogrammetric stereo methods applied to Worldview 2 imagery is examined, using LiDAR bathymetry and towed video as ground truth, and it is demonstrated with a case study that these methods are complementary: where one method has limited accuracy, the other often has improved accuracy. The depths of uniform, highly-reflective (sand) sea bed are better estimated with a radiative transfer-based method, while areas of high visual contrast in the scene, identified using a local standard deviation measure, are better estimated using the photogrammetric technique. It is shown that a hybrid method can give a potential improvement in accuracy of more than 50% (from 2.84 m to 1.38 m RMSE in the ideal case) compared to either of the two methods alone. Metrics are developed to characterize the regions of the scene where each technique is superior, realizing an improved overall depth accuracy over either method alone of between 16.9% and 19.7% (a realized RMSE of 2.36 m).


1. Introduction

In the study of the world’s oceans, bathymetry is a key variable that offers researchers immediate information regarding habitats [1] and ocean currents [2]. Used in combination with ocean color modelling in near-coastal and estuarine regions, it provides important measures of the effect of land-based human activities on the health of these environments [3]. Satellite observations of oceans are now recognized [4] as a practical source of information for water quality, bathymetry, and benthic habitat mapping when air- or ship-borne surveys are either too costly or impractical to implement.
While radiative-transfer derived bathymetry from single images has been practiced and researched for many years [5,6], the production by the German company EOMAP of a Landsat-derived bathymetric map of the Great Barrier Reef has attracted media attention [7] to the availability of these techniques, and efforts are being made to quantify their accuracy for hydrographic purposes [8]. The new wave of satellites, including Worldview 2 [9], Landsat 8 [10], and the European Sentinel satellites [11], provides increased opportunities for low-cost, broad-scale bathymetry with increasing accuracy.
The notion of photogrammetric bathymetry from aerial sources has been known for many years [12], but it has not attracted widespread acceptance in the coastal mapping community, although it has been used to map shallow, rocky riverbeds in New Zealand [13,14], along with the related technique of structure-from-motion [15]. The ASTER instrument [16] has been used to infer the presence of submarine sand waves from stereo images of sun glitter [17], but it cannot be used to measure underwater features directly, because the ASTER stereo images are acquired at infrared wavelengths that do not penetrate water sufficiently for this application.
Since the early days of attenuation-based depth-from-color, it has been noted that areas of the seafloor with highly-reflective, uniform known cover (such as sand) have the highest accuracies for estimates of bathymetry [5,18]. Similarly, it is well known that areas of high image contrast, where stereo matching works well, have the best accuracy for the photogrammetric approach [19,20].
The concept of using both of these methods together in order to create the most accurate bathymetric map possible is not entirely novel; indeed, the German company GAF [21] has recently begun offering a bathymetry product based on both the satellite stereo and spectral attenuation approaches. Feurer et al. [14] investigated the use of high-resolution aerial photography for estimating the bathymetry of river beds, using a case study to evaluate both a photogrammetric and a light attenuation-based approach. However, they did not investigate the potential of a hybrid method as we propose here. In general, quantitative analysis of this question is currently lacking in the literature.
In this paper, we demonstrate a hybrid method, showing the potential for producing a higher-accuracy bathymetric map than either approach by itself.

2. Materials and Methods

A site within the Marmion Marine Park (31.81°S, 115.71°E), off the coast of Perth, Western Australia, was chosen as the area for exploring these techniques in a hybrid method. The study region features depths between 2 m and 20 m and a variety of underwater geography, including limestone reefs running parallel to the coastline, underwater limestone platforms, and exposed sandy regions. Figure 1a shows an overview of the study area, and Figure 1b shows a 2009 Landsat TM image of the park, with the outline of the study area marked by a red box. Some of the contrasting undersea features are visible, which is the basis for satellite-derived bathymetry from stereo; there are also many uniform sandy regions, which allow the two methods to be compared over different cover types.

2.1. Satellite Images

For both methods, it is desirable to have clear, glint-free satellite images with a flat sea state. For the radiative transfer approach, a nearly perpendicular (near-nadir) look angle provides superior results [22]. For the stereo approach, the second image should be clear and as glint-free as possible, preferably acquired simultaneously, or nearly so, with the first [23]. The near-nadir pointing angle also helps to simplify the geometry of the refraction correction and the epipolar search region [24,25].
Both methods are tested with Worldview 2 (WV2, [9]) data, which are supplied at a resolution of 2 m in 8 bands, of which the red, green, blue, and "coastal" (400 nm) bands are the most relevant to the current study. Figure 2 shows the WV2 scenes used. Compared to Landsat TM5, more of the underwater landscape is visible in the WV2 imagery, due to WV2's superior dynamic range and spatial resolution. While Landsat 8 has far better dynamic range than earlier Landsats, it was not considered here, as no Landsat 8 images contemporary with the WV2 stereo pair were available.
For the attenuation-derived bathymetry, WV2 data acquired in June 2012 were used. As a mid-winter image, it potentially suffers from low solar angles and a rough sea state, but it was found to be free of these effects and thus suitable for deriving physics-based depths.
For the photogrammetric approach, the stereo pair of WV2 images was acquired in December 2009, approximately two minutes apart, from the satellite's downwards- and backwards-pointing cameras. While the 2009 images could also have been used for the Semi-Analytical Model for Unmixing and Concentration Assessment (SAMBUCA) method, the 2012 image was deemed superior for that purpose (and thus for this study) according to criteria derived in [22].
Both sets of imagery were de-glinted [26] and the 2012 imagery for the radiative transfer model was atmospherically corrected, using a correction that also accounted for the effect of the air-water interface [3].
The photogrammetric method was implemented on WV2 Band 2 (blue) rather than the panchromatic or coastal bands. The coastal band was not used because it was quite noisy and did not show the features as clearly as the blue band; it is also documented to penetrate deepest only in pure water [27]. The panchromatic band was avoided so that the results would have a 2 m resolution, and thus be directly comparable to the attenuation-derived bathymetry; it was also observed to show less of the underwater features, as its spectral response tapers off in the green part of the spectrum. Figure 2 shows the deglinted blue-band stereo pair and the deglinted multispectral image used in this paper.

2.2. LiDAR Bathymetry

LiDAR depth data [28] were collected by Fugro LADS (Kidman Park, SA) with their Mark II instrument, in a survey conducted for WA Transport in April 2009 off the coast of Western Australia from Two Rocks to Cape Naturaliste; see Table 1. The LiDAR waveform data were processed by Fugro Pty Ltd. to create depth maps, by measuring the time difference between LiDAR pulses returned from the surface of the water and from the seafloor. LiDAR data are broadly accepted within the hydrographic community as ground truth, since they are of known, high accuracy [29]. The LiDAR data were collected at approximately 4.5 m point spacing and are up-sampled here to 2 m for comparison with the satellite-derived depths. The LiDAR ground truth covered the full extent of the area considered here.
LiDAR reflectivity, calculated using CSIRO-developed techniques, is used as an additional input to segment the image into different cover types (Section 2.4), so that different classes can be assessed for accuracy. The bathymetry maps were co-registered to the WV2 using manually selected control points and a cubic warp. The LiDAR bathymetry is shown in Section 3.

2.3. Towed Video

Towed video is from a CSIRO survey in March 2014 [1]. The video is time and location stamped in frame, as shown in Figure 3. The videos allow the sand and non-sand masks created in Section 2.4 to be validated, ensuring their accuracy.

2.4. Creating a Sand/Non-Sand Mask

In order to examine the accuracy of radiative-transfer-based and stereo bathymetry over different cover-types, we created a map of sand and non-sand pixels. Using the covariates of depth, WV2 bands 1–4, and LiDAR reflectivity [1], we employed the following procedure to classify sand and non-sand pixels:
1. Thirty-four homogeneous training sites are manually selected at various depths and spatial locations throughout the scene.
2. Assuming that the training sites all belong to different seafloor cover types, a canonical variate analysis [30] is performed on these groups, and their scores are plotted.
3. The training sites are assigned to 10 super-classes according to where they cluster on the CV plots. These are "natural" clusters in the training sites and are not given labels.
4. Each pixel in the image is assigned to one of these super-classes according to its maximum likelihood of belonging to that class.
5. Using the ground truth video and visual inspection of the imagery, each super-class is then assigned to a sand-only, non-sand-only, or mixed class (containing both sand and non-sand).
The method closely follows procedures that are typically applied to terrestrial land-cover classification problems, such as [31]. The intention is not to create a perfectly classified map, but rather to measure the accuracy of the bathymetry in pixels which are fairly certain to be sand compared to those that are fairly certain to not be sand. For this reason, mixed classes were acceptable, provided good populations that had a high probability of being sand or non-sand were identified.
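To make the procedure concrete, the following is a minimal sketch of steps 2, 4, and 5 in Python. It is an illustration rather than the implementation used here: it assumes the covariates have been stacked into per-pixel feature vectors, and it substitutes scikit-learn's LinearDiscriminantAnalysis (closely related, though not identical, to the canonical variate analysis of [30]) for the CVA step.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_pixels(train_X, train_y, pixels):
    """Assign each pixel to a super-class by Gaussian maximum likelihood
    in canonical variate (CV) space.

    train_X: (n_samples, n_covariates) training-site covariates
             (depth, WV2 bands 1-4, LiDAR reflectivity)
    train_y: (n_samples,) super-class labels from the CV-plot clusters
    pixels:  (n_pixels, n_covariates) whole-of-scene covariates
    """
    # LDA scores stand in for the CV scores of the training groups.
    cva = LinearDiscriminantAnalysis().fit(train_X, train_y)
    cv_train, cv_pixels = cva.transform(train_X), cva.transform(pixels)

    # Fit a Gaussian to each super-class in CV space and assign every
    # pixel to the class under which it is most likely.
    classes = np.unique(train_y)
    log_lik = np.empty((len(classes), len(cv_pixels)))
    for k, c in enumerate(classes):
        grp = cv_train[train_y == c]
        mvn = multivariate_normal(grp.mean(axis=0),
                                  np.cov(grp, rowvar=False),
                                  allow_singular=True)
        log_lik[k] = mvn.logpdf(cv_pixels)
    return classes[np.argmax(log_lik, axis=0)]
```

The resulting super-class map is then relabelled as sand-only, non-sand-only, or mixed by inspection against the towed video.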

2.5. SAMBUCA Method

Deriving bathymetry from ocean color is a relatively established technique [5] that exploits the exponential attenuation of light through the water column in order to estimate the depth of optically shallow water. Noting that optically shallow observations are composed of mixtures of water column and seafloor colors, these techniques approximate the water column contribution from observations of deep water areas [32,33]. Since then, many similar empirical techniques (where algorithms are derived from the regression of imagery on ground-truth depth measurements) have been, and continue to be, employed [18,34]. In recent years there has been a trend in the literature towards more physics-based or semi-empirical models [35], to better deal with the optical complexity of the water column.
The Semi-Analytical Model for Unmixing and Concentration Assessment (SAMBUCA; [36,37]) is a semi-empirical model that is designed to allow the simultaneous retrieval of depth and inherent optical properties from hyperspectral images of optically shallow water bodies. It extends the work of Lee et al. [35], by including a mixing parameter for bottom substrates (also proposed by Hedley and Mumby, [38]) and allows mixtures of water constituents to be estimated for each pixel. SAMBUCA uses a forward-modelling approach and candidate spectral libraries to predict bottom type and depth simultaneously with the inherent optical properties (IOPs) of the water column.
For a full explanation of the details of its implementation, the reader is referred to the references above. Briefly, SAMBUCA employs the inversion/optimization method of Lee et al. [35,39], where the analytical expression for $r_{rs}$ was previously derived for an optically shallow water body by Maritorena et al. [40]:
$$r_{rs} = r_{rs}^{dp} + e^{-\kappa_d H}\left[\left(q\,\rho_1 + (1-q)\,\rho_2\right)e^{-\kappa_b H} - r_{rs}^{dp}\,e^{-\kappa_C H}\right] \qquad (1)$$
with:
  • $r_{rs}$: the subsurface remote-sensing reflectance (just below the waterline);
  • $r_{rs}^{dp}$: the subsurface remote-sensing reflectance of the infinitely deep water column;
  • $\kappa_d$: the vertical attenuation coefficient for diffuse down-welling light;
  • $\kappa_b$: the vertical attenuation coefficient for diffuse up-welling light originating from the bottom;
  • $\kappa_C$: the vertical attenuation coefficient for diffuse up-welling light originating from each layer in the water column;
  • $\rho_j$, for $j = 1, 2$: the bottom reflectance for two different substrates;
  • $q$: the proportion of substrate 1 (so $1-q$ is the proportion of substrate 2); and
  • $H$: the length of the water column through which the light is passing.
In the SAMBUCA algorithm, Equation (1) is modified using semi-analytical relationships derived by Lee et al. [35,39], relating the absorption coefficients and deep water reflectance to five independent variables associated with various IOPs. IOP variables related to chlorophyll and colored dissolved organic matter were fixed in this analysis to simplify the search space, reducing the number of unknown variables from 10 to 5.
Candidate values for the coefficients are used as initial estimates in the semi-analytic model, which is then compared with the remotely sensed measurements; the parameters are adjusted, via non-linear least squares optimization, until they best match the observations. The method was originally designed for hyperspectral data, so for multispectral data, as employed here, a variety of assumptions must be made about the IOPs and substrate reflectance in order to achieve a robust result [27]. The errors attributable to the reduced spectral bandwidth of the WV2 instrument are explored in some detail with simulated sand spectra in [41]. Of particular relevance is that nearly no light penetrates water at near-infrared and longer wavelengths, so these bands are of limited use, which reduces the number of variables that can be simultaneously solved for when using a pixel-wise approach.
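As a concrete illustration of the forward-modelling approach, the sketch below evaluates Equation (1) and inverts it for a single pixel. It is a deliberate simplification: the attenuation terms are passed as fixed per-band arrays rather than being derived from the retrieved IOPs, as the full model does, and the function and argument names are ours, not SAMBUCA's.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_rrs(H, q, rrs_dp, kd, kb, kc, rho1, rho2):
    """Equation (1): subsurface remote-sensing reflectance, one value per
    band, for a water column of depth H over a two-substrate bottom."""
    bottom = q * rho1 + (1.0 - q) * rho2
    return rrs_dp + np.exp(-kd * H) * (bottom * np.exp(-kb * H)
                                       - rrs_dp * np.exp(-kc * H))

def invert_pixel(rrs_obs, rrs_dp, kd, kb, kc, rho1, rho2):
    """Retrieve depth H and substrate proportion q for one pixel by
    non-linear least squares against the observed band reflectances."""
    def residuals(p):
        H, q = p
        return forward_rrs(H, q, rrs_dp, kd, kb, kc, rho1, rho2) - rrs_obs
    fit = least_squares(residuals, x0=[5.0, 0.5],
                        bounds=([0.0, 0.0], [30.0, 1.0]))
    return fit.x  # estimated (H, q)
```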
The application of hyperspectral inversion methods to multispectral satellite imagery has several limitations [42], such as the increased width of the spectral bands in multispectral satellite sensors (required to ensure sufficient signal to noise). Lee [42] tested the impact of applying the model to multispectral data on IOP retrievals and found that, for optically deep waters, overestimation of the inverted absorption coefficient can occur when narrow-band hyperspectral models (like SAMBUCA) are applied to broad-band multispectral data. For optically deep, clear waters, the implication can be an overestimation of the absorption coefficient at 440 nm (a440) of at least 20%. For turbid or complex waters, where a440 is greater than approximately 0.3 to 1 m⁻¹, the bandwidth uncertainties are relatively small (<5%), and Lee therefore argues that their use can be justified in these cases [42].
For the results presented here, the SAMBUCA algorithm was employed with limited ground truth and additional information provided for processing. This simulates the real-life situation of an isolated location being mapped. It also accentuates the regions where its performance is lacking, so helping to emphasize the point that the two technologies are complementary.

2.6. Stereo Method

Here we present an overview of the method used for deriving depths from the stereo pair of WV2 images.

2.6.1. Rational Polynomial Coefficients

The use of rational polynomial models in lieu of full orbital camera models is now widespread, and we do not propose to contribute to methods already well described in the literature (see, e.g., [43]). Work by Di et al. [44] describes in detail how to use the rational polynomial model supplied by DigitalGlobe with their imagery to calculate 3D ground positions from matched pixels, by linearizing using Taylor's theorem. We provide a brief overview below.
The WV2 metadata provides the coefficients of a rational polynomial, $R$, that maps 3D coordinates on the ground to the row and column of the image provided, i.e.,

$$R(x, y, h) = (r, c), \qquad (2)$$

where:
  • $x$ is the latitude,
  • $y$ is the longitude,
  • $h$ is the height of the scene point relative to the WGS84 geoid,
  • $r$ is the row number in the image, and
  • $c$ is the column number in the image.
For a given row, column, and height, $(r, c, h)$, it is possible to solve for the longitude and latitude, $(x, y)$, of that point on the ground. If we do this for a fixed $(r, c)$ and two arbitrary heights, $h_0 < h_1$, we find two points, $(x_0, y_0, h_0)$ and $(x_1, y_1, h_1)$, such that the ray between them must pass through the camera centre on the satellite at the time of acquisition. Thus, the rational polynomials in the metadata allow every pixel in each image to have a calculable unit vector $n(r, c)$ that points to the camera centre. In practice, because the satellite is 770 km above the surface of the earth, we have $n(r, c) \approx n$ for every $(r, c)$ of a given image. For our stereo pair of images, $n_1 = (0.18, 0.60, 0.78)$ and $n_2 = (0.04, 0.12, 0.99)$, from which we calculate that the camera centre rays are separated by approximately 44.7 degrees and that the plane containing these two normals is inclined approximately 3.6 degrees from vertical.
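The ray construction can be sketched as follows. Here rpc stands for the supplied ground-to-image polynomial; the fsolve-based inversion and the flat-earth degree-to-metre conversion are our own illustrative choices (any rigorous geodetic conversion would do, since only the direction of the ray is needed).

```python
import numpy as np
from scipy.optimize import fsolve

def invert_rpc(rpc, r, c, h, xy0):
    """Solve the ground-to-image polynomial rpc(x, y, h) -> (r, c) for the
    latitude and longitude at a fixed pixel (r, c) and height h; xy0 is a
    starting guess, e.g. the scene centre."""
    f = lambda xy: np.asarray(rpc(xy[0], xy[1], h)) - np.array([r, c])
    return fsolve(f, xy0)

def view_vector(rpc, r, c, xy0, h0=0.0, h1=500.0):
    """Unit vector n(r, c) pointing from the scene towards the camera,
    found by intersecting the pixel's ray with two arbitrary heights."""
    xa, ya = invert_rpc(rpc, r, c, h0, xy0)
    xb, yb = invert_rpc(rpc, r, c, h1, xy0)
    m_per_deg = 111_320.0  # approximate metres per degree of latitude
    ray = np.array([(xb - xa) * m_per_deg,                           # north
                    (yb - ya) * m_per_deg * np.cos(np.radians(xa)),  # east
                    h1 - h0])                                        # up
    return ray / np.linalg.norm(ray)

# Convergence angle between the two (nearly constant) camera rays:
# np.degrees(np.arccos(np.dot(n1, n2)))  # ~44.7 degrees for this pair
```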

2.6.2. Accounting for Refraction

It is not within the scope of this paper to provide a detailed analysis of the effect of the air-water refraction interface on the results of the stereo problem; we refer the reader to [24,25], among others, who have observed that in general there is no closed-form solution to the stereo photogrammetry problem in the presence of an air-water interface. However, when the sensor is very high compared to the depth, and in particular when, as here, the incidence angles from the scene are very similar, an approximation is effective [13]. Our case is similar to Figure 3 from [24].
In Figure 4, we present a simplified version of the refraction diagrams available in the literature cited above and elsewhere. The incidence and refracted angles are related by Snell's Law, $\sin \Theta_i / \sin \phi_i = n$, where we have assumed a value of $n = 1.33$ for seawater. For our particular case, the epipolar plane, $\Lambda$, is nearly vertical, which means we can ignore the effect of refraction bending this plane at the water line; the error caused by this assumption is less than 1% in the present case.
We therefore assume a single epipolar direction between the two images of the stereo pair, and correct the apparent height, $\hat{h}$, to the true height, $h$, according to the formula:

$$h = \hat{h}\,\frac{\tan \Theta_1 + \tan \Theta_2}{\tan \phi_1 + \tan \phi_2} \qquad (3)$$
Since $\hat{h}$ is the height below the geoid, and not the actual water depth, this result contains an offset proportional to the difference between these two values (assumed constant over a scene). This offset is removed by subtraction, as explained at the end of the section below.
We also note that the refraction induces a horizontal displacement in the observed position of the scene point:

$$X = \hat{X} + \hat{h}\,\frac{\tan \Theta_2 \tan \phi_1 - \tan \Theta_1 \tan \phi_2}{\tan \phi_1 + \tan \phi_2} \qquad (4)$$
In the example here, this change in disparity is less than half a Worldview pixel over the range of heights considered, so it does not have a significant effect on the positions of the calculated heights.
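A minimal implementation of this correction, assuming the in-air incidence angles of the two views are known for each pixel (in radians):

```python
import numpy as np

N_WATER = 1.33  # refractive index assumed for seawater, as in the text

def snell(theta):
    """Refracted (in-water) angle for an in-air incidence angle theta."""
    return np.arcsin(np.sin(theta) / N_WATER)

def true_height(h_apparent, theta1, theta2):
    """Equation (3): correct the apparent triangulated height below the
    water line for refraction, given the two in-air incidence angles."""
    phi1, phi2 = snell(theta1), snell(theta2)
    return (h_apparent * (np.tan(theta1) + np.tan(theta2))
            / (np.tan(phi1) + np.tan(phi2)))
```

Because the refracted angles are smaller than the incidence angles, the ratio exceeds one, so the corrected height is always deeper than the apparent height.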

2.6.3. Determining Corresponding Pixels

The literature on formulating and solving the stereo problem is extensive [45] and current [46]. It is beyond the scope of this paper to review and analyze the relative merits of these many algorithms; we take a pragmatic approach to the correspondence problem as described below.
To determine which pixels correspond in the two images, we look along epipolar lines, with bilinear interpolation used where the observations do not fall onto a whole pixel value. Normalized correlation matching over a rectangular window that is longer over the direction of the epipolar line is used, with disparities chosen that have the maximum normalized correlation score over the window. Fractional disparities (for sub-pixel accuracy) are calculated by fitting a local quadratic in the neighborhood of the maximum.
The size of the matching window is important: a large window is more likely to give a positive match for a particular pixel, but a small window is necessary for more accurate localization of features. The accuracy with which the maximum of the correlation function can be located depends on the features in the matching window; areas of the seafloor with relatively uniform cover have fewer features to match, and a larger window is needed than in more heterogeneous regions.
The disparity maps shown in this paper are produced using C code with an adaptively-sized matching window. A short segment of epipolar line is interpolated from the image around each pixel of interest. The matching is then performed locally on this short segment. The correlation function is evaluated over a range of window sizes, and the window size that gives the most pronounced peak is chosen.
For the results presented here, each pixel was matched with a window that was 11 pixels across (transverse to the epipolar lines) and between 5 and 20 pixels long. The cost function is evaluated for every window length in this range, and the result with the most sharply defined maximum is used; the idea of adapting the window size according to a stereo quality metric is discussed in [47]. The sharpness of the maximum is given by the curvature of the cost function at the maximum, as estimated by a locally fitted quadratic, which also provides sub-pixel accuracy in the optimization. The curvature of the correlation matching function provides a measure of confidence in the predicted disparity [47,48]: a higher curvature implies a better-defined peak, and thus a better-defined maximum.
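The matching step can be sketched as follows; this does not reproduce the original C implementation. template_of and window_of are assumed helpers that extract the 11-pixel-wide left-image window of a given length and the corresponding window on the bilinearly resampled epipolar segment at an integer disparity.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation of two equally-sized windows."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def subpixel_peak(scores, i):
    """Quadratic fit through scores[i-1..i+1]: fractional offset of the
    maximum and its curvature (the confidence measure described above)."""
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    curv = y0 - 2.0 * y1 + y2                 # negative at a well-defined peak
    off = 0.5 * (y0 - y2) / curv if curv != 0 else 0.0
    return off, -curv

def match(template_of, window_of, disparities, lengths=range(5, 21)):
    """Adaptive-window correlation matching: the window length giving the
    sharpest correlation peak wins. disparities are assumed consecutive
    integers, so the fractional offset can simply be added."""
    best = None
    for L in lengths:
        t = template_of(L)
        scores = [ncc(t, window_of(L, d)) for d in disparities]
        i = int(np.argmax(scores))
        if 0 < i < len(scores) - 1:           # interior peaks only
            off, sharpness = subpixel_peak(scores, i)
            if best is None or sharpness > best[0]:
                best = (sharpness, disparities[i] + off)
    return best  # (confidence, sub-pixel disparity), or None if no peak
```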
For the purpose of this study, we have made the mean of the estimated stereo depths equal to the mean of the LiDAR depths by adding an appropriate offset. This corresponds to the real life situation where several ground truth depths are known and used to calibrate the stereo bathymetry, which is a realistic scenario envisaged by the authors [18,33].
It would be reasonable, for the purpose of comparison, to apply this step to the SAMBUCA depths as well, but we do not normalize them over the whole scene: although that improves the average accuracy, it degrades the sandy areas where SAMBUCA performs best. The reason is that, for this particular implementation, the algorithm tends to systematically over-estimate the depth of shallow, dark targets by mischaracterizing them as deep, bright targets, while being quite accurate for sand pixels (the errors for sand are not systematically biased). Normalizing the SAMBUCA bathymetry over the whole scene would therefore add a positive depth offset, introducing a systematic error to the sand class, so this was not done.
It would also be reasonable to leave the raw SAMBUCA depth estimates unchanged. However, this would be inconsistent with the normalization performed on the stereo data. It is well documented [40,49] that areas of low albedo (deep or dark targets) pose a problem for the accuracy of SAMBUCA and other radiative-transfer methods. We have therefore chosen to normalize the output to the sand class only (as described in Section 2.4), so that the mean depth of the sand sites is equal for the LiDAR bathymetry and the SAMBUCA bathymetry; it is reasonable to assume that a selection of sand sites would be known a priori. This somewhat accentuates the results and conclusions in the paper; however, we performed the experiments both with and without the normalization, and the fundamental conclusions do not change.

3. Results

In this section, we examine the behavior of the errors of the stereo-derived bathymetry and the SAMBUCA-derived bathymetry. First we show the results of the classification, see Figure 5a.
The two substrata, with bottom reflectances $\rho_1$ and $\rho_2$, employed by the model to produce these results are detritus (seagrass wrack) [50] for the dark reflectance and sand [51] for the bright reflectance. The classified map of Figure 5a can be compared with the proportions, $q$, from Equation (1) of the dark substrate with reflectance $\rho_1$; see Figure 5b. The maps have similar characteristics, which is to be expected.
For the results of the depth estimation, Figure 6a shows the ground truth LiDAR depths, Figure 6b shows the SAMBUCA depths, while Figure 6c shows the stereo depth estimates. Note that in the deep water to the west of the image, there are relatively few points for the stereo algorithm to match, resulting in patches of low accuracy.
To compare the distributions of accurately-estimated depth pixels throughout the image, Figure 7 shows the pixels that have been estimated to within 1 m for each of the two methods. The accurate stereo pixels are colored green, while the accurate SAMBUCA pixels are colored red; areas where both methods are accurate appear in yellow (where the red and green pixels coincide). Note that the majority of the accurate pixels are either green or red, and there are only small areas where both methods are accurate. The total percentages of accurate pixels for each algorithm, and for both together, are given in Table 2. This shows that the methods have the potential to provide improved depth estimates in different areas of the image. The whole-of-scene errors are summarized in the final column of Table 3.
To further demonstrate that the two methods are complementary, in the following sections, the errors for each method are compared according to three criteria: Depth; image texture (measured by local standard deviation); and seafloor cover type.

3.1. Effect of Depth

In this section, we examine the accuracy of the two methods with respect to the depth of water being estimated. Figure 8 shows density plots of the absolute accuracy of each pixel against the depth of water in which it lies. The density plots are produced using R's scatterplot function, which sums 2D Gaussians centred at points according to the absolute error and depth of each pixel. High concentrations of pixels are indicated by red hues, while blue hues indicate low densities of points; the plots can be thought of as 3D histograms, shaded according to point density. As an alternative, we also provide equivalent plots in which the estimated depths are binned according to ground truth depths (cf. Figures 4–7 of [52]). The binned versions are the standard way of presenting such data in bathymetric analysis, although in our view the density plots, which are standard throughout the rest of the literature, provide just as much insight into the error structure and have the advantage of being built into many data analysis software packages.
Figure 8a shows the results for the stereo method. The accuracy of the majority of the pixels is more-or-less flat with depth, although there is a broad trend of outliers that increases with depth, indicated by the expanding yellow cloud as the depth increases.
The plot of SAMBUCA absolute accuracy against depth in Figure 8b shows that there is a relationship between the depth and absolute accuracy, which is discussed in Section 4.

3.2. Image Texture as a Measure of Confidence

Figure 9 shows plots of the local standard deviation versus absolute accuracy for the stereo bathymetry method and for the SAMBUCA method. We calculated the standard deviation of the pixels in the adaptive stereo matching window described in Section 2.6.3.
The stereo result indicates that if we place a threshold, $\gamma$, on the local standard deviation, taking only the pixels above it, we capture many pixels with low absolute error and can have much more confidence in the accuracy of the predicted depths. The SAMBUCA algorithm, on the other hand, does not possess this property, and we can see in Figure 9b that a threshold on the standard deviation would exclude many accurate pixels.

3.3. The Effect of Seabed Substrate

The final comparison performed here is of the accuracy of the predicted depths over the sand and non-sand pixels. To characterize the errors for the different substrates, we calculate the mean absolute error, $\Delta = \frac{1}{N} \sum_{i=1}^{N} |d_i - \hat{d}_i|$, with $d_i$ the actual LiDAR depth, $\hat{d}_i$ the estimated depth, and $N$ the total number of pixels being considered. Table 3 shows a summary of these results for the three classes from Section 2.4.
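In code, the per-class statistic reduces to a masked mean; a trivial numpy sketch (array names are ours):

```python
import numpy as np

def mean_abs_error(d_lidar, d_est, mask):
    """Delta over the pixels selected by a class mask
    (sand, non-sand, mixed, or all pixels)."""
    return float(np.abs(d_lidar[mask] - d_est[mask]).mean())
```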
In Figure 10a,b, histograms of the absolute depth errors are presented for the sand and non-sand classes. Figure 10c shows that, for this case, there are few deep pixels in the non-sand class. The mean absolute errors presented in Table 3 also show this effect: the mean absolute error of the SAMBUCA-predicted depths over sand is around half that over the non-sand pixels. The result for all pixels falls between the results for the sand and non-sand pixels, and the result for the mixed pixels reflects their composition, which is more sand than non-sand.

3.4. Hybrid of Satellite-Based Bathymetry Approaches

To explore the potential improvements in error rates by employing a hybrid of the two methods, we present results based on a pixel-level decision criterion using the above metrics.

3.4.1. Potential Best Achievable Results Using the Most Accurate Pixels

In order to provide a baseline for improvements in error rates using a hybrid technique, we first present the combined depth image where the most accurate pixel is chosen from the two bathymetry models. This can only be done in the presence of complete ground truth, so does not represent a realistic scenario, but it is useful for the purpose of this benchmark comparison.
Let $D$ be the ground truth depth, $D_S$ the stereo depth estimate, and $D_R$ the radiative transfer-based estimate, and define the best hybrid depth estimate, $D_{HB}$, to be:

$$D_{HB} = \begin{cases} D_S & \text{if } |D_S - D| < |D_R - D| \\ D_R & \text{otherwise} \end{cases} \qquad (5)$$
Figure 11a shows the resulting combined image. The mean absolute error for the combined image is Δ = 1.38 m, which can be compared to the overall individual errors given in Table 3.
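A sketch of this oracle selection (array names are ours):

```python
import numpy as np

def oracle_hybrid(d_true, d_stereo, d_rad):
    """Per-pixel selection of whichever estimate is closer to the LiDAR
    truth. This needs complete ground truth, so it is a benchmark only."""
    use_stereo = np.abs(d_stereo - d_true) < np.abs(d_rad - d_true)
    return np.where(use_stereo, d_stereo, d_rad)
```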

3.4.2. Decision Based on Local Texture

Next, we explore the idea of combining the depth estimates based on the local standard deviation of the stereo pair in the matching window. Let $\sigma$ be the local standard deviation in the matching window from above. For a given threshold $\gamma \ge 0$, define the hybrid depth estimate, $D_{H\gamma}$, to be:

$$D_{H\gamma} = \begin{cases} D_S & \text{if } \sigma > \gamma \\ D_R & \text{if } \sigma \le \gamma \end{cases} \qquad (6)$$
In the presence of ground truth, the mean absolute error of the hybrid depths can be plotted as a function of $\gamma$, as shown by the black curve in Figure 12a. For small values of $\gamma$, the stereo estimates are nearly all used, which means that the superior performance of SAMBUCA in the low-textured areas is not exploited. As $\gamma$ increases, more and more of the low-textured areas are excluded from the combined estimate, which improves the overall accuracy until $\gamma \approx 3.1$, where the mean absolute error is $\Delta = 2.36$ m; beyond this point, the SAMBUCA estimate starts to be used in areas of higher texture, decreasing the overall accuracy of the combined result.
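Both the threshold rule of Equation (6) and the error-versus-threshold curve are straightforward to express; a sketch, with array names ours:

```python
import numpy as np

def texture_hybrid(d_stereo, d_rad, local_sd, gamma):
    """Equation (6): stereo where the matching window is textured,
    radiative transfer elsewhere."""
    return np.where(local_sd > gamma, d_stereo, d_rad)

def sweep_gamma(d_true, d_stereo, d_rad, local_sd, gammas):
    """Mean absolute error of the hybrid as a function of the threshold
    (the black curve of Figure 12a, given full ground truth)."""
    return np.array([np.abs(texture_hybrid(d_stereo, d_rad, local_sd, g)
                            - d_true).mean() for g in gammas])
```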
In practice, the optimal value of $\gamma$ can be estimated from a few ground truth locations, as described below. We note that there is a broad region of the black curve in Figure 12a over which significant improvements in the depth estimates are achieved without knowing $\gamma$ particularly accurately. The combined image for the optimal value of $\gamma$ is shown in Figure 11b; it is evident there that the threshold on the standard deviation has removed the obvious inaccuracies caused by the stereo method's poor matching in the western half of the image.
To simulate having only limited ground truth knowledge, we estimated $\gamma$ using a random sample of 20 ground truth points. An example plot is shown in red in Figure 12a. In practice, these samples should be targeted at a range of different image textures, which could be identified from the image by eye; however, to recreate the worst-case scenario, we chose 20 samples from arbitrary areas of the image. We repeated this 1500 times, thus creating the full range of potential $\gamma$ estimates. The histogram of these estimates is shown in Figure 12b, with mean estimate $\bar{\gamma} = 3.45$. To achieve a 10% improvement in the mean absolute error (from 2.89 m to 2.59 m), we would need to estimate $2.2 < \gamma < 6.9$, which occurred in 80% of the samples that we took. This shows that, even with a random sample of ground truth depths, a significant improvement can be made by estimating $\gamma$ in this way. Further experiments are required to show whether a nominal value of $\gamma \approx 3.1$ would be adequate for use on other images, potentially without requiring any ground truth.
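The sampling experiment can be sketched as follows, reusing sweep_gamma from the previous sketch; the seed and names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for repeatability

def estimate_gamma(d_true, d_stereo, d_rad, local_sd, gammas, n_points=20):
    """Choose the threshold minimizing the hybrid error over a random
    sample of n_points ground-truth pixels; one draw corresponds to the
    red curve of Figure 12a."""
    idx = rng.choice(d_true.size, size=n_points, replace=False)
    errs = sweep_gamma(d_true.ravel()[idx], d_stereo.ravel()[idx],
                       d_rad.ravel()[idx], local_sd.ravel()[idx], gammas)
    return gammas[int(np.argmin(errs))]

# Repeating estimate_gamma 1500 times over fresh samples gives the
# distribution of threshold estimates summarized in Figure 12b.
```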

3.4.3. Decision Based on Cover Type

In Figure 10, it is evident that the errors in the stereo depths are not significantly different for sand and non-sand pixels, although Table 3 does indicate some small differences. The SAMBUCA estimates, on the other hand, are significantly better in the bright, uniform sandy areas of the image. This indicates that a pixel-level decision based on cover type is worth considering. For a cover-type-based hybrid method, we tested two assignments: first, that all sand pixels are estimated by the SAMBUCA algorithm, with the non-sand and mixed classes estimated by the stereo method; and second, that the stereo method estimates only the non-sand pixels, with the sand and mixed pixels estimated by SAMBUCA. The best mean absolute error, $\Delta = 2.58$ m, was given by using the stereo estimates for all except the sand class. The combined image is shown in Figure 11c.

4. Discussion

The results and analysis presented in Section 3 reveal the improved accuracy potential for a hybrid of two previously known satellite-derived bathymetry methods. The improved accuracy is due to the fact that the areas of the image where the two methods are most effective are broadly disjoint sets, as revealed in Figure 7.

4.1. On the Overall Accuracy of the Methods

Our aim was to characterize the errors of the two methods, so that if a hybrid approach is applied to a given location there is a protocol for deciding which method should be employed for each part of the scene. There is a body of literature on the sensitivity and accuracy of (mainly land-based) stereo photogrammetry [29,48,53] and of physics-based bathymetry [54,55,56], and these should also inform decisions about which method to employ in particular regions. The SAMBUCA algorithm has various quality assessment metrics associated with the convergence of the optimization algorithm and its goodness-of-fit [37]; these were not employed here, but could also be used to aid the pixel-level decision process.
The whole-of-scene errors are summarized in the final column of Table 3, and are comparable to the results of similar methods implemented in the literature [24,27], although we must emphasize here that we are aiming to extract broad conclusions about the behavior of each method, not extract the best results that are possible.
Regarding the accuracy of the SAMBUCA method (see Figure 8b), the concentration of shallow pixels with high error rates (centered around −8 m depth, with absolute errors of around 8 m) suggests that these pixels have been incorrectly identified as deep, bright targets whose intensity has been attenuated by the water column, when they are actually shallow, dark targets. The concentration of deep and relatively accurate pixels in the −20 m to −18 m depth range is explained by the lack of non-sand pixels in the deeper parts of the image (see Figure 10c). Some of the errors may also be due to bottom types that are naturally darker than the infinitely deep water column [27,49] (and which thus appear brighter as the depth increases and the water column begins to dominate the signal).
With regard to the accuracy of the stereo method, we observed in Figure 8a that there is a broad trend of outliers that increases with depth. This is because the contrast of the bottom features decreases with increasing depth as the water column signal begins to dominate that of the substrate, making it harder for the matching algorithm to localize their positions accurately. In general, we would not expect accuracy to depend on distance for terrestrial stereo [20], because the variations in height are usually small compared to the parameters of the stereo system (baseline, overall distance from the scene); it is only the blurring due to the water column that causes this.
The extensive deep and sandy areas in the south west of the scene are not well matched in the stereo algorithm, and this is apparent in Figure 6c (the mottled texture), Figure 7 (the absence of green coloring in this region), and the broad band of yellow points in the −20 m to −17 m region of Figure 8a.

4.2. Combining Satellite-Based Bathymetry Approaches

Section 3.4 shows that the potential benefits of using a hybrid satellite bathymetry approach are considerable. Carl and Miller [21] have applied a combined method over the Caspian Sea, using an unspecified satellite sensor with a resolution of 4 m; they claim <2 m vertical accuracy, which is consistent with the results presented here, although their technique has not yet been published in the literature and, to our knowledge, they have not presented an analysis of the errors.
We demonstrated that the best improvement that can be achieved with any hybrid approach for these data is a reduction of the mean absolute error from 2.88 m (stereo) or 3.21 m (SAMBUCA) to 1.38 m, less than half of either, though this oracle selection is not a realistic approach given its dependence on complete ground data. The key to approaching, in practice, the accuracy achievable when the most accurate pixel is chosen is to predict the errors without knowing the ground truth; we have attempted to characterize the errors of each of the two methods as well as possible.
For stereo methods, the presence of features, which can be seen as a measure of image texture [14], allows successful matching to be performed, and this is one of the key factors in predicting the accuracy of the stereo solution [20]. In recent work, Jalobeanu and Goncalves [57] explain that it is not possible to predict confidence intervals on disparity maps a priori (i.e., given just the image data and metadata without ground truth), chiefly because of matching and modelling errors in the stereo recovery process. Nonetheless, several predictors of accuracy have been discussed [20,58], with [47] also investigating several measures of confidence for predicting errors in stereo vision. We explored the idea of using the standard deviation of the matching window as a measure of confidence in the depth predicted from photogrammetric methods.
The standard deviation is a measure of the local texture of the imagery, and hence is an indicator of how well the stereo algorithm can match features between the two images of the stereo pair. Thus, in the density plot of Figure 9a, we can see that as the standard deviation increases, the absolute error decreases for the stereo method.
For radiative transfer methods, there are many studies of the effect of seafloor cover on retrieval accuracy. For example, Miecznik and Grabowska [27] experiment with simulated and real data and find that seagrass results have larger retrieval errors than sand, and that the accuracy for sand substrates at 20 m depth is similar to that for darker substrates at 10 m depth.
The absolute errors of the SAMBUCA pixels for the sand and non-sand classes have distinctly different distributions, with the sand pixels mostly falling below 3 m of absolute error. This is consistent with the work of Burns et al. [49], who used the numerical simulator HYDROLIGHT to investigate the sensitivity of the ocean color equations and found that the largest dependence is on the size of the difference in albedo between the water column and the substrate, implying that brighter substrates will have increased sensitivity, and thus accuracy, of the retrieved depth.
While we can use depth to separate the scene into components where the stereo and SAMBUCA estimates are relatively more accurate, there are no systematic trends apparent in this example that allow general statements to be made. In addition, we would require the depth in the first place, implying that some sort of bootstrapping algorithm would be needed.

5. Conclusions

This paper presents the results of analyses that demonstrate the complementary properties of the radiative-transfer-based bathymetry technique SAMBUCA and classical photogrammetric techniques adapted and applied to bathymetric problems.
There are substantial theoretical improvements possible by combining these two techniques: using the most accurate pixels from each of the two methods provides an improvement of 51% over the SAMBUCA method and 53% over the stereo method.
Each of the methods performs better for different sea bed types observed in a satellite image. The SAMBUCA algorithm is best suited to highly reflective, uniform areas where the bottom substrate is bright compared to the water column. Sand is an example of such a substrate, and has the advantage of being extremely common in the marine environment. For the scene examined in this paper, there were many deep (>15 m) sandy areas and SAMBUCA performed better in these deeper areas. Radiative transfer algorithms will often fail when the amount of light from the bottom falls below a certain level and the signal from the water column begins to dominate. Thus, deep targets can cause problems, but shallow dark targets can also be mistaken for deep dark ones when there are limited spectral bands to distinguish them.
Stereo algorithms are at their most effective when there are highly contrasted and distinct features in the image, which allows correlation matching to uniquely match features between the two images in the stereo pair. In uniform, or repeating, areas of the image, there can be no features to find, or the features can be falsely matched to other similar features. Thus, the accuracy of the stereo technique over a sandy substrate will often be limited.
We showed that, by classifying the imagery used to provide the depth estimates, we could use SAMBUCA for the sand areas and the stereo algorithm for the non-sand and mixed areas, giving an improvement in mean absolute error of 12% over stereo alone, or 9% over the SAMBUCA algorithm alone.
When the local-neighborhood standard deviation is used to decide between stereo and SAMBUCA, we observed an improvement in mean absolute error of 20% over the stereo algorithm alone and 17% over the SAMBUCA algorithm alone. By improving these methods and identifying other ways of predicting the errors, we hope to approach the theoretical "best" method in future research. An alternative strategy, also worth considering, is to use the stereo method to constrain the SAMBUCA algorithm on dark targets.

Author Contributions

Conceptualization, N.C. and S.C.; Formal analysis, S.C.; Methodology, S.C. and N.C.; Software, S.C., E.J.B. and J.A.; Validation, S.C.; Visualization, S.C.; Writing—original draft, S.C.; Writing—review & editing, S.C., N.C. and J.A.

Funding

This research received no external funding.

Acknowledgments

This work was internally funded by the CSIRO Oceans and Atmosphere Business Unit and CSIRO Data 61.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Collings, S.; Campbell, N.A.; Keesing, J. Quantifying the discriminatory power of remote sensing technologies for benthic habitat mapping. Int. J. Remote Sens. 2018. accepted. [Google Scholar]
  2. Symonds, G.; Black, K.P.; Young, I.R. Wave-driven flow over shallow reefs. J. Geophys. Res. 1995, 100, 2639–2648. [Google Scholar] [CrossRef]
  3. Brando, V.; Dekker, A. Satellite hyperspectral remote sensing for estimating estuarine and coastal water quality. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1378–1387. [Google Scholar] [CrossRef]
  4. Pacheco, A.; Horta, J.; Loureiro, C.; Ferreira, O. Retrieval of nearshore bathymetry from Landsat 8 images: A tool for coastal monitoring in shallow waters. Remote Sens. Environ. 2015, 159, 102–116. [Google Scholar] [CrossRef]
  5. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef] [PubMed]
  6. Martin-Lauzer, F.-R. Imagery-Derived Bathymetry Validated. Available online: http://www.hydro-international.com/issues/articles/id1454-imageryderived__Bathymetry_Validated.html (accessed on 1 June 2013).
  7. McConchie, R.F. Great Barrier Reef in 3D. ABC. Available online: http://www.abc.net.au/news/rural/2013-11-21/great-barrier-reef-map/5108374 (accessed on 9 May 2016).
  8. International Hydrographic Organization. Satellite Derived Bathymetry (Paper for Consideration by CSPCWG). Available online: http://www.iho.int/mtg_docs/com_wg/CSPCWG/CSPCWG11-NCWG1/CSPCWG11-08.7A-Satellite%20Bathymetry.pdf (accessed on 15 July 2015).
  9. DigitalGlobe. Worldview-2 Data Sheet. 2009. Available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/98/WorldView2-DS-WV2-rev2.pdf (accessed on 7 August 2018).
  10. Markham, B.; Storey, J.; Morfitt, R. Landsat-8 sensor characterization and calibration. Remote Sens. 2015, 7, 2279–2282. [Google Scholar] [CrossRef]
  11. European Space Agency. The Operational Copernicus Optical High Resolution Land Mission. Available online: http://esamultimedia.esa.int/docs/S2-Data_Sheet.pdf (accessed on 13 February 2015).
  12. Tewinkel, G.C. Water depths from aerial photographs. Photogramm. Eng. 1963, 29, 1037–1042. [Google Scholar]
  13. Westaway, R.M.; Lane, S.N.; Hicks, M. Remote sensing of clear-water, shallow, gravel-bed rivers using digital photogrammetry. Photogramm. Eng. Remote Sens. 2001, 67, 1271–1281. [Google Scholar]
  14. Feurer, D.; Bailly, J.S.; Puech, C.; Le Coarer, Y.; Viau, A. Very-high-resolution mapping of river-immersed topography by remote sensing. Prog. Phys. Geogr. 2008, 32, 403–419. [Google Scholar] [CrossRef]
  15. Javernicka, L.; Brasingtonb, J.; Carusoa, B. Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry. Geomorphology 2014, 213, 166–182. [Google Scholar] [CrossRef]
  16. Salomonson, V.; Abrams, M.J.; Kahle, A.; Barnes, W.; Xiong, X.; Yamaguchi, Y. Evolution of NASA’s Earth observation system and development of the Moderate-Resolution Imaging Spectroradiometer and the Advanced Spaceborne Thermal Emission and Reflectance Radiometer instruments. In Land Remote Sensing and Global Environmental Change; Ramachandran, B., Justice, C.O., Abrams, M.J., Eds.; NASA’s Earth Observing System and the Science of ASTER and MODIS; Springer: New York, NY, USA, 2010; pp. 3–34. [Google Scholar]
  17. Zhang, H.-G.; Yang, K.; Lou, X.; Li, D.; Shi, A.; Fu, B. Bathymetric mapping of submarine sand waves using multiangle sun glitter imagery: A case of the Taiwan Banks with ASTER stereo imagery. J. Appl. Remote Sens. 2015, 9, 9–13. [Google Scholar] [CrossRef]
  18. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef] [Green Version]
  19. Hobi, M.L.; Ginzler, C. Accuracy assessment of digital surface models based on WorldView-2 and ADS80 stereo remote sensing data. Sensors 2012, 12, 6347–6368. [Google Scholar] [CrossRef] [PubMed]
  20. Davis, C.H.; Jiang, H.; Wang, X. Modeling and estimation of the spatial variation of elevation error in high resolution DEMs from stereo-image processing. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2483–2489. [Google Scholar] [CrossRef]
  21. Carl, S.; Miller, D. GAF’s Innovative Stereo Approach. 2014. Available online: https://www.gaf.de/sites/default/files/PR_GAF_RWE_Bathymetry.pdf (accessed on 7 August 2018).
  22. Botha, E.J.; Brando, V.; Dekker, A.J. Effects of per-pixel variability on uncertainties in bathymetric retrievals from high-resolution satellite images. Remote Sens. 2016, 8, 459. [Google Scholar] [CrossRef]
  23. Aguilar, M.A.; Saldaña, M.; Aguilar, F.J. Generation and quality assessment of stereo-extracted DSM from GeoEye-1 and WorldView-2 imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 1259–1271. [Google Scholar] [CrossRef]
  24. Murase, T.; Tanaka, M.; Tani, T.; Miyashita, Y.; Ohkawa, N.; Ishiguro, S.; Suzuki, Y.; Kayanne, H.; Yamano, H. A photogrammetric correction procedure for light refraction effects at a two-medium boundary. Photogramm. Eng. Remote Sens. 2008, 9, 1129–1136. [Google Scholar] [CrossRef]
  25. Fryer, J.G. Photogrammetry through shallow water. Aust. J. Geod. 1983, 38, 25–38. [Google Scholar]
  26. Hedley, J.D.; Harborne, A.R.; Mumby, P.J. Simple and robust removal of sun glint for mapping shallow-water benthos. Int. J. Remote Sens. 2005, 26, 2107–2112. [Google Scholar] [CrossRef]
  27. Miecznik, G.; Grabowska, D. WorldView-2 bathymetric capabilities. Int. Soc. Opt. Photonics 2012. [Google Scholar] [CrossRef]
  28. Parker, H.; Sinclair, M. The successful application of airborne LiDAR bathymetry surveys using latest technology. In Proceedings of the 2012 Oceans—Yeosu, Yeosu, Korea, 21–24 May 2012. [Google Scholar]
  29. International Hydrographic Organization (IHO). IHO Standards for Hydrographic Surveys; IHO: La Condamine, Monaco, 2008. [Google Scholar]
  30. Campbell, N.A.; Atchley, W.R. The geometry of canonical variate analysis. Syst. Zool. 1981, 30, 268–280. [Google Scholar] [CrossRef]
  31. Chia, J.; Caccetta, P.A.; Furby, S.L.; Wallace, J.F. Derivation of plantation type maps. In Proceedings of the 13th Australasian Remote Sensing and Photogrammetry Conference, Canberra, Australia, 21–24 November 2006. [Google Scholar]
  32. O’Neill, N.T.; Gauthier, Y.; Lambert, E.; Hubert, L.; Dubois, J.M.M.; Dubois, H.R.E. Imaging spectrometry applied to the remote sensing of submerged seaweed. Spectr. Signat. Objects Remote Sens. 1988, 287, 315. [Google Scholar]
  33. Lyzenga, D.R. Remote sensing of bottom reflectance and water attenuation parameters in shallow water using aircraft and Landsat data. Int. J. Remote Sens. 1981, 2, 71–82. [Google Scholar] [CrossRef]
  34. Deidda, M.; Sanna, G. Bathymetric extraction using Worldview-2 high resolution images. Int. Soc. Photogramm. Remote Sens. 2012, XXXIX-B8, 153–157. [Google Scholar] [CrossRef]
  35. Lee, Z.; Carder, K.L.; Mobley, C.D.; Steward, R.G.; Patch, J.S. Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl. Opt. 1999, 38, 3831–3843. [Google Scholar] [CrossRef] [PubMed]
  36. Wettle, M.; Brando, V.E. SAMBUCA: Semi-Analytical Model for Bathymetry, Un-Mixing, and Concentration Assessment; CSIRO Land and Water Science Report. 2006. Available online: www.clw.csiro.au/publications/science/2006/sr22-06.pdf (accessed on 7 August 2018).
  37. Brando, V.; Anstee, J.M.; Wettle, M.; Dekker, A.G.; Phinn, S.R.; Roelfsema, C. A physics based retrieval and quality assessment of bathymetry from suboptimal hyperspectral data. Remote Sens. Environ. 2009, 113, 755–770. [Google Scholar] [CrossRef]
  38. Hedley, J.D.; Mumby, P.J. A remote sensing method for resolving depth and subpixel composition of aquatic benthos. Limnol. Oceanogr. 2003, 48, 480–488. [Google Scholar] [CrossRef] [Green Version]
  39. Lee, Z.; Kendall, L.C.; Chen, R.F.; Peacock, T.G. Properties of the water column and bottom derived from Airborne Visible Imaging Spectrometer (AVIRIS) data. J. Geophys. Res. Oceans 2001, 106, 11639–11651. [Google Scholar] [CrossRef]
  40. Maritorena, S.; Morel, A.; Gentili, B. Diffuse reflectance of oceanic shallow waters: Influence of water depth and bottom albedo. Limnol. Oceanogr. 1994, 39, 1689–1703. [Google Scholar] [CrossRef] [Green Version]
  41. Lee, Z.; Weidemann, A.; Arnone, R. Combined effect of reduced band number and increased bandwidth on shallow water remote sensing: The case of WorldView 2. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2577–2586. [Google Scholar] [CrossRef]
  42. Lee, Z.P. Applying narrowband remote-sensing reflectance models to wideband data. Appl. Opt. 2009, 48, 3177–3183. [Google Scholar] [CrossRef] [PubMed]
  43. Dowman, I.; Dolloff, J.T. An evaluation of rational functions for photogrammetric restitution. Int. Arch. Photogramm. Remote Sens. 2000, 33, 254–266. [Google Scholar]
  44. Di, K.; Ma, R.; Li, R. Deriving 3D shorelines from high resolution IKONOS satellite images with rational functions. In Proceedings of the 2001 ASPRS Annual Convention, St. Louis, MO, USA, 23–27 April 2001. [Google Scholar]
  45. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  46. Yang, Q. Stereo matching using tree filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 834–846. [Google Scholar] [CrossRef] [PubMed]
  47. Hu, X.; Mordohai, P. A quantitative evaluation of confidence measures for stereo vision. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2121–2133. [Google Scholar] [PubMed]
  48. Egnal, G.; Mintz, M.; Wildes, R.P. A stereo confidence metric using single view imagery with comparison to five alternative approaches. Image Vis. Comput. 2004, 22, 943–957. [Google Scholar]
49. Burns, B.A.; Taylor, J.R.; Sidhu, H. Uncertainties in bathymetric retrievals. In Proceedings of the 17th National Conference of the Australian Meteorological and Oceanographic Society (IOP Publishing), Canberra, Australia, 27–29 January 2010.
50. Anstee, J.M.; Botha, E.J.; Williams, R.J.; Dekker, A.G.; Brando, V.E. Optimizing classification accuracy of estuarine macrophytes by combining spatial and physics-based image analysis. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Honolulu, HI, USA, 25–30 July 2010; pp. 1367–1370.
51. Botha, E.J.; Brando, V.E.; Dekker, A.G.; Anstee, J.M.; Sagar, S. Increased spectral resolution enhances coral detection under varying water conditions. Remote Sens. Environ. 2013, 131, 247–261.
52. Lyzenga, D.R.; Malinas, N.P.; Tanis, F.J. Multispectral bathymetry using a simple physically based algorithm. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2251–2259.
53. Jalobeanu, A. Predicting spatial uncertainties in stereo photogrammetry: Achievements and intrinsic limitations. In Proceedings of the 7th International Symposium on Spatial Data Quality, Coimbra, Portugal, 12–14 October 2011.
54. Lee, Z.; Arnone, R.; Hu, C.; Werdell, J.; Lubac, B. Uncertainties of optical parameters and their propagations in an analytical ocean color inversion algorithm. Appl. Opt. 2010, 49, 369–381.
55. Wang, P.; Boss, E.S.; Roesler, C. Uncertainties of inherent optical properties obtained from semianalytical inversions of ocean color. Appl. Opt. 2005, 44, 4047–4085.
56. Sagar, S.; Brando, V.E.; Sambridge, M. Noise estimation of remote sensing reflectance using a segmentation approach suitable for optically shallow waters. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7504–7512.
57. Jalobeanu, A.; Goncalves, G. The unknown spatial quality of dense point clouds derived from stereo images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1013–1017.
58. Alharthy, A. A consistency test between predicted and actual accuracy of photogrammetry measurements. In Proceedings of the American Society for Photogrammetry and Remote Sensing Annual Conference, Baltimore, MD, USA, 7–11 March 2005.
Figure 1. (a) An overview of the study area, with an inset showing its location in Australia. The city of Perth, Western Australia, is in the centre, and the red rectangle shows the extents of the image in (b). (b) Hillarys Marina is in the centre, with Marmion Marine Park to the left; here, the red rectangle shows the study area.
Figure 2. Input data for the study area: (a,b) the December 2009 WorldView 2 (WV2) deglinted stereo pair (blue band); (c) the June 2012 WV2 multispectral image (RGB).
Figure 3. Example video frames from the survey, showing typical water clarity and the variety of cover types in the scene. Positions (latitude and longitude × 1000) are shown at the top of each frame. (a) Reef image. (b) Sand image.
Figure 4. The geometry of the refraction correction.
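The correction sketched in Figure 4 accounts for the fact that a stereo ray intersected without regard to the air-water interface places the sea bed too shallow, and Snell's law supplies the fix. The paper's exact formulation is not reproduced in this back matter, so the following Python sketch is illustrative only, assuming a flat sea surface, a single refraction event, and a seawater refractive index of about 1.34 (the function name and the clipping bounds are our own):

```python
import numpy as np

N_WATER = 1.34  # approximate refractive index of seawater

def refraction_corrected_depth(apparent_depth, view_zenith_deg):
    """Correct an apparent (naively intersected) stereo depth for
    refraction at a flat air-water interface using Snell's law."""
    # Clip to avoid a 0/0 tangent ratio exactly at nadir; the ratio
    # tends to N_WATER in the small-angle limit.
    theta_air = np.radians(np.clip(view_zenith_deg, 1e-3, 89.0))
    theta_water = np.arcsin(np.sin(theta_air) / N_WATER)  # Snell's law
    # The in-water ray is steeper, so the sea bed lies deeper than the
    # unrefracted back-projection suggests.
    return apparent_depth * np.tan(theta_air) / np.tan(theta_water)
```

At nadir the correction reduces to multiplying the apparent depth by the refractive index, roughly a factor of 1.34.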
Figure 5. (a) Classified map, showing non-sand (NS), sand (S), and mixed (M) pixels. (b) A map of the SAMBUCA q values from Equation (1), showing the estimated proportion of the dark substrate, ρ₁.
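Equation (1) appears earlier in the paper and is not reproduced here; if, as the caption suggests, it is a two-member linear mixture in which q weights a dark substrate reflectance ρ₁ against a brighter one, a minimal sketch of that mixing step (the function and argument names are hypothetical) would be:

```python
def mixed_bottom_reflectance(q, rho_dark, rho_bright):
    """Two-member linear substrate mixture; q is the estimated
    proportion of the dark substrate (rho_1 in the Figure 5 caption)."""
    return q * rho_dark + (1.0 - q) * rho_bright
```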
Figure 6. Three bathymetry images, with the same depth scale. (a) LiDAR depth, (b) depth estimates from SAMBUCA, and (c) depth estimates from stereo.
Figure 7. Green pixels indicate where the stereo-predicted depth is within 1 m of the true depth; red pixels indicate where the SAMBUCA-predicted depth is within 1 m of the true depth; yellow pixels indicate where both methods are within 1 m.
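A mask like Figure 7 is straightforward to compose from the two depth estimates and the ground truth. The sketch below (array names are assumptions) puts stereo agreement in the green channel and SAMBUCA agreement in the red channel, so pixels where both agree render yellow:

```python
import numpy as np

def accuracy_overlay(stereo, sambuca, truth, tol=1.0):
    """Colour overlay in the spirit of Figure 7: green where stereo is
    within tol metres of truth, red where SAMBUCA is, yellow where both."""
    rgb = np.zeros(truth.shape + (3,))
    rgb[..., 1] = (np.abs(stereo - truth) < tol).astype(float)   # green
    rgb[..., 0] = (np.abs(sambuca - truth) < tol).astype(float)  # red
    return rgb  # red + green renders as yellow where both are accurate
```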
Figure 8. (a) Ground-truth depth vs. stereo estimates: kernel density plot (left) and density plot binned by ground-truth depth (right). (b) Ground-truth depth vs. SAMBUCA estimates: kernel density plot (left) and density plot binned by ground-truth depth (right).
Figure 9. Density plots of (a) stereo accuracy versus standard deviation and (b) SAMBUCA accuracy versus standard deviation.
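The standard deviation referred to in Figure 9 is computed over a local window around each pixel. A common box-filter implementation is sketched below with an illustrative 7 × 7 window (the paper ties the measure to the stereo matching window, whose size is not restated in this back matter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=7):
    """Per-pixel standard deviation over a size x size window, using
    the box-filter identity var = E[x^2] - E[x]^2."""
    img = np.asarray(image, dtype=np.float64)
    mean = uniform_filter(img, size)
    mean_of_sq = uniform_filter(img * img, size)
    # Clamp tiny negative variances caused by floating-point rounding.
    return np.sqrt(np.maximum(mean_of_sq - mean * mean, 0.0))
```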
Figure 10. (a) Distributions of absolute errors for different classes for SAMBUCA. (b) Distributions of absolute errors for different classes for stereo. (c) Depth distribution for sand and non-sand pixels.
Figure 11. (a) The hybrid image based on choosing the better of the two methods for each pixel. (b) The hybrid image based on thresholding the standard deviation of the matching window. (c) The hybrid image based on the substrate type.
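The hybrid images in Figure 11b,c amount to a per-pixel switch between the two estimators. A minimal sketch, assuming a local-contrast map such as the local_std output above and a boolean sand mask from the Figure 5 classification, is:

```python
import numpy as np

def hybrid_depth(stereo, sambuca, contrast, sd_threshold):
    """Per-pixel hybrid (Figure 11b): take the stereo estimate where
    local contrast is high enough for reliable matching, and the
    radiative-transfer (SAMBUCA) estimate elsewhere."""
    return np.where(contrast > sd_threshold, stereo, sambuca)

def hybrid_depth_by_substrate(stereo, sambuca, is_sand):
    """Substrate-driven variant (Figure 11c): SAMBUCA over bright,
    uniform sand; stereo over textured, non-sand cover."""
    return np.where(is_sand, sambuca, stereo)
```

The direction of the switch follows Table 3: SAMBUCA is the better estimator over sand, while stereo is better over high-contrast, non-sand cover.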
Figure 12. (a) Plot of the mean absolute error of the combined bathymetry vs. the SD threshold for all depth values (black trace), with a typical plot of the same quantity using 20 random ground-truth values (red trace). (b) Histogram of 1500 estimates of the optimal SD threshold, each obtained from 20 samples of the ground-truth depth.
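Figure 12 suggests the SD threshold can be estimated from a small number of ground-truth soundings. The resampling sketch below is an assumption inferred from the caption (20 ground-truth pixels per estimate, repeated to build the histogram in Figure 12b); the names and the candidate grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_sd_threshold(stereo, sambuca, contrast, truth,
                          candidates, n_samples=20):
    """One estimate of the SD threshold: draw n_samples ground-truth
    pixels and return the candidate threshold that minimises the
    hybrid's mean absolute error on that sample."""
    idx = rng.choice(truth.size, size=n_samples, replace=False)
    s, m, c, t = (np.ravel(a)[idx] for a in (stereo, sambuca, contrast, truth))
    mae = [np.mean(np.abs(np.where(c > thr, s, m) - t)) for thr in candidates]
    return candidates[int(np.argmin(mae))]

# Repeating this, e.g. 1500 times as in Figure 12b, yields a
# distribution of threshold estimates:
# estimates = [estimate_sd_threshold(S, M, C, T, np.linspace(0, 50, 101))
#              for _ in range(1500)]
```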
Table 1. Specifications for the Fugro LADS Mark II device.
Parameter | Specification
LiDAR System | Fugro LADS Mark II
IMU | GEC-Marconi FIN3110
Platform | DeHavilland Dash-8
Height | 365–670 m
Laser | Nd:YAG
Operating Frequency | 900 Hz
Nominal Point Spacing | 4.5 m
Table 2. Total % of accurate pixels for each algorithm, and % of pixels for which both algorithms are accurate.
Estimates with <1 m of Error | % of Total Pixels
SAMBUCA | 25.3%
Stereo | 38.4%
Both | 8.96%
Table 3. Mean absolute error, Δ (m), for sand, non-sand, mixed, and all classes for the two methods.
Method | Sand | Non-Sand | Mixed | All
Stereo | 3.05 | 2.85 | 2.95 | 2.94
SAMBUCA | 2.19 | 3.44 | 2.71 | 2.84
