Article

Optimal Weight Design Approach for the Geometrically-Constrained Matching of Satellite Stereo Images

School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, 1439957131 Tehran, Iran
* Author to whom correspondence should be addressed.
Submission received: 13 June 2017 / Revised: 12 August 2017 / Accepted: 13 September 2017 / Published: 18 September 2017
(This article belongs to the Section Remote Sensing Image Processing)

Abstract
This study presents an optimal weighting approach for combined image matching of high-resolution satellite stereo images (HRSI). When the rational polynomial coefficients (RPCs) for a pair of stereo images are available, geometric constraints can be combined with the image matching equations. Combining least squares image matching (LSM) equations with geometric constraint equations necessitates determining appropriate weights for the different types of observations. The common terms between the two sets of equations are the image coordinates of the corresponding points in the search image. Considering that the RPCs of a stereo pair are produced in compliance with the coplanarity condition, geometric constraints are expected to play an important role in the image matching process. In this study, in order to control the impact of the imposed constraint, optimal weights for the observations were assigned by equalizing their average redundancy numbers. For a detailed assessment of the proposed approach, a pair of CARTOSAT-1 sub-images, along with their precise RPCs, was used. In addition to comparing the different matching results, the dimensions of the error ellipses of the intersection points in the object space were compared. The analysis showed that the geometric mean of the semi-minor and semi-major axes obtained by our method was reduced to 0.17 times that of the unit weighting approach.


1. Introduction

Satellite image matching is an essential stage in the production of photogrammetric products, such as various large scale maps and digital surface models (DSMs). Image matching is defined as finding correspondences between two or more images: after the matching primitives are determined, a similarity criterion is evaluated between them and the corresponding features are detected.
There are different categorizations for image matching techniques. Some techniques are designed specifically for stereo images and some for multiple view images [1]. Methods of stereo image matching are divided into two groups, namely, global and local methods [2]. The local or window-based methods work on local windows in each point of the stereo images, while the global methods generate a depth map from entire images through defining and optimizing an energy function [1]. The global methods demonstrate better performance in comparison with local methods in dense matching, but their computational complexity is higher [3]. Moreover, the semi-global methods have reduced computational complexity through the introduction of some simplifications in optimization algorithms of the global methods, such as semi-global matching (SGM) [4], tSGM [3], and SGM-Nets [5].
Local methods are divided into area-based matching (ABM) and feature-based matching (FBM). In ABM methods, for a specific point in the first image, the search for the same pattern in the second image relies only on the grey level pattern in a neighborhood around this point. Cross-correlation and least-squares matching (LSM) are the most commonly used ABM techniques. FBM methods perform image matching based on the extraction of features from the two images; a correspondence is then established between the extracted features based on their similar attributes. Scale-Invariant Feature Transform (SIFT) [6] and Speeded-up Robust Features (SURF) [7] are among the well-known examples of the FBM approach [8]. Additionally, wavelet-based methods help to detect features in the scale space [9,10]. The precision of the FBM methods is limited, similar to that of the global methods. Therefore, the results of these methods are often used as seed points for precise matching methods [11]. In addition, the final density of the matching results depends entirely on the success of the feature extraction step.
In this paper, the focus is on the definition of an image matching method for satellite stereo images with high precision and reliability which is capable of generating dense matching results. Least squares matching, as an ABM method, has the potential to achieve high precision [8,12], and its mathematical model allows evaluation of the quality of the results. However, it requires seed points within a small pull-in range, and it may converge to a wrong point in regions with poor texture, even with an appropriate seed point [1]. The latter properties reduce the reliability of the matching technique. Adding geometric constraints could potentially help to achieve more reliable matching results [13]. An advantageous characteristic of the LSM technique is its flexibility in incorporating geometric constraint equations into the matching equations. In this regard, the LSM can be combined with the geometric constraint [14,15] or, separately, used to improve the precision of provided seed points [16].
The epipolar geometry constraint is one of the constraints that can be defined using the orientation information (in the form of a rigorous or rational polynomial camera (RPC) model) of satellite stereo imagery. The well-known epipolar line pairs of stereo images acquired by frame cameras can be locally assumed for the matching of satellite linear array stereo images [16]. Even though the epipolar geometry in linear array images is not a straight line [17], approximate epipolar lines can be defined using the Shuttle Radar Topography Mission (SRTM) elevation model and the RPC model [18]. The epipolar constraint is capable of increasing the convergence radius and rate of the matching [12], reducing the number of false matches [19], and significantly reducing the dimensions of the search space [20].
Here, in the image matching algorithm, the RPC models are used to provide the seed points in the coarse matching step and to restrict the search space in the form of a geometric constraint. As a result, the reliability of the matching is increased by employing the known orientation parameters of the stereo images. Additionally, a new weighting approach is proposed in this study in order to combine the RPC intersection of corresponding rays, as the geometric constraint, with the LSM method. The primary prerequisite of this combination is the assignment of appropriate weights to the different types of observations. Better results can be expected if observations of different types have equal accuracy and reliability, and improving the definition of the weight matrices can deliver such results. In this context, an optimal weighting technique has been proposed for the second-order design (SoD) of geodetic networks [21]. Since different types of observations are involved in the SoD stage (angles and distances), the authors of [21] proposed assigning weights to the observations in such a way that the same average redundancy numbers are obtained for all types of observations.
In this manuscript, with the aim of increasing the reliability of satellite image matching, we decided to utilize this technique for combining geometric constraint equations and least-squares image matching equations. This means that the point coordinates (geometric nature) observations of the geometric constraint were combined with the grey level (radiometric nature) observations of LSM in an optimal manner. In comparison with the unit weighting, the proposed method can significantly improve the precision of the space intersection of the corresponding rays. Additionally, this method, as a purely statistical technique, was compared with a conventional weighting method that uses the radiometric content of images.

2. Geometrically-Constrained LSM

As stated in the introduction, the main problem addressed in this study is finding an accurate corresponding point in the second image given a fixed point in the first image. Most precise satellite image matching methods use the LSM technique. An image matching method, guided by the object space, is used to reduce the inherent ambiguity of this problem. The LSM technique can be combined with a large variety of geometric constraints. Some of these constraints must first be linearized and organized in the least squares framework and then added to the LSM equations. Here, relying on the known sensor orientation parameters, the space intersection equations are used as a geometric constraint. The framework of the constrained image matching strategy is illustrated in Figure 1.
The space intersection of corresponding rays is written based on the stereo viewing geometry. Here, the point in the first image and the initial position of the point in the second image are known. Each point contributes two observations, whose weights should be assigned so as to achieve appropriate redundancy numbers relative to the average redundancy number of the LSM observations. A higher weight leads to a smaller redundancy number, which means reduced reliability and freedom of the observation, and vice versa.

2.1. Least Squares Image Matching (LSM)

The LSM equation, based on an affine geometric transformation and a linear (drift and offset) radiometric transformation, is formed as follows:

$$f(x, y) - \tau(x, y) = T_1\big(g(x', y')\big), \qquad (1)$$

where $f$ is the grey value of the template window, formed centered on the point in the first image, $g$ is the grey value of the search window, formed centered on the seed point in the second image, $\tau$ is the true error function, $(x, y)$ are the pixel coordinates of the point in the template window, and $(x', y')$ are the pixel coordinates of the corresponding point in the search window. Additionally, $T_1$ is the radiometric transformation, and $T_2$ and $T_3$ are 2D affine transformations, as follows:

$$T_1\big(g(x', y')\big) = r_1 + r_2 \cdot g(x', y'), \qquad (2)$$

$$x' = T_2(x, y) = a_1 + a_2 x + a_3 y, \quad y' = T_3(x, y) = b_1 + b_2 x + b_3 y, \qquad (3)$$

where $a_i$ and $b_i$ are the affine transformation parameters, and $r_1$ and $r_2$ are the parameters of the linear radiometric transformation. Substituting $T_1(g(x', y'))$ from Equation (2) and $x', y'$ from Equation (3) into Equation (1) yields:

$$f(x, y) - \tau(x, y) = r_1 + r_2 \cdot g(a_1 + a_2 x + a_3 y,\ b_1 + b_2 x + b_3 y). \qquad (4)$$
By expanding Equation (4), the grey level matching equations, written for each pixel pair in the two homologous windows, are given by:

$$\begin{pmatrix} f(x_1, y_1) - g^0(x_1, y_1) \\ f(x_2, y_2) - g^0(x_2, y_2) \\ \vdots \\ f(x_n, y_n) - g^0(x_n, y_n) \end{pmatrix} = \begin{pmatrix} g_{x_1} & g_{x_1} x_1 & g_{x_1} y_1 & g_{y_1} & g_{y_1} x_1 & g_{y_1} y_1 & 1 & g^0(x_1, y_1) \\ g_{x_2} & g_{x_2} x_2 & g_{x_2} y_2 & g_{y_2} & g_{y_2} x_2 & g_{y_2} y_2 & 1 & g^0(x_2, y_2) \\ \vdots & & & & & & & \vdots \\ g_{x_n} & g_{x_n} x_n & g_{x_n} y_n & g_{y_n} & g_{y_n} x_n & g_{y_n} y_n & 1 & g^0(x_n, y_n) \end{pmatrix} \begin{pmatrix} da_1 \\ da_2 \\ da_3 \\ db_1 \\ db_2 \\ db_3 \\ dr_1 \\ dr_2 \end{pmatrix} \qquad (5)$$

where $g_{x_i}$ and $g_{y_i}$ are the grey level derivatives of the $i$-th pixel in the two directions, and $g^0$ is the grey value of the search window in each iteration, which must be interpolated from the search image. By adding three zero columns to the end of the design matrix in Equation (5), we reach the following expression, which readily combines with the constraint equations:
$$l_M + \varepsilon_M = A_M x, \qquad (6)$$

where $l_M$ is the vector of grey level differences between the pixels of the two image windows, $\varepsilon_M$ is the residual vector, and $A_M$ is the design matrix of the LSM with the added zero columns. The vector of unknown parameters of the combined system can be written as:

$$x = [da_1, da_2, da_3, db_1, db_2, db_3, dr_1, dr_2, d\varphi, d\lambda, dh]^t, \qquad (7)$$

where $(d\varphi, d\lambda, dh)$ are the differentials of the geodetic coordinates of the points, added to the LSM unknown vector by the geometric constraint (Section 2.2). As can be seen from the vector $x$, the geodetic coordinates of the intersection point, along with the resulting error ellipsoid of each point, can be estimated during the matching process. Geodetic systems, in particular WGS84 (World Geodetic System 1984), establish a geocentric terrestrial reference system in which each point on the Earth's surface is defined by its geodetic latitude $\varphi$, geodetic longitude $\lambda$, and ellipsoidal height $h$ [22].
In general, Equation (6), without adding any constraint equations, can be used only to improve the precision of the initial corresponding (seed) points, provided that the coarse matching step was performed with high reliability, e.g., using manual matching [23].
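As a concrete sketch of how the observation vector and design matrix of Equation (5) can be assembled, the following NumPy fragment builds one grey-level equation per pixel. The function name and the window-local pixel-coordinate origin are our assumptions, not part of the original method description:

```python
import numpy as np

def lsm_observations(f, g0, gx, gy):
    """Assemble the grey-level observations and design matrix of Equation (5).

    f  : template window (first image)
    g0 : search window resampled in the current iteration
    gx, gy : grey-level derivatives of the search window
    Columns follow [da1, da2, da3, db1, db2, db3, dr1, dr2].
    """
    rows, cols = f.shape
    # window-local pixel coordinate grids (origin choice is illustrative)
    y, x = np.mgrid[0:rows, 0:cols].astype(float)
    x, y, gx, gy, g0, f = (a.ravel().astype(float) for a in (x, y, gx, gy, g0, f))
    l = f - g0                                     # grey-level differences l_M
    A = np.column_stack([gx, gx * x, gx * y,       # x-affine terms
                         gy, gy * x, gy * y,       # y-affine terms
                         np.ones_like(x), g0])     # radiometric offset and drift
    return l, A
```

For a 35 × 35 window this yields the 1225 equations referred to later in Section 3.4.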

2.2. Geometric Constraint Based on the RPC Space Intersection

Space intersection equations for satellite stereo images are generally formed based on the RPC model. The forward rational functions are as follows [24,25]:

$$L_i^n = \frac{P_{i1}(\varphi^n, \lambda^n, h^n)}{P_{i2}(\varphi^n, \lambda^n, h^n)}, \quad S_i^n = \frac{P_{i3}(\varphi^n, \lambda^n, h^n)}{P_{i4}(\varphi^n, \lambda^n, h^n)}, \qquad (8)$$

where $(L_i^n, S_i^n)$ are the normalized line and sample coordinates of a point in the $i$-th image, $(\varphi^n, \lambda^n, h^n)$ are the normalized geodetic coordinates of the point in ground space, and $P_{i1}$–$P_{i4}$ are the third-order polynomials related to the $i$-th image. The coefficients of these polynomials are calculated by the image provider and are included in the RPC files of the stereo images.
Each pair of corresponding points extracted from the stereo images is related to a specific point in ground space. The coordinates of the ground point are the common parameters between these equations. Thus, the space intersection consists of four equations, the two relations of Equation (8) being written individually for each of the corresponding points. The equations of the RPC space intersection are based on the following expressions:

$$L_i = F_{L_i}(\varphi, \lambda, h) = L_i^n \cdot L_{sc}^i + L_{of}^i, \quad S_i = F_{S_i}(\varphi, \lambda, h) = S_i^n \cdot S_{sc}^i + S_{of}^i, \qquad (9)$$

where $F_{L_i}$ and $F_{S_i}$ are the rational polynomial functions defined in Equation (8), $L_{sc}^i$ and $S_{sc}^i$ denote the scale values for the two image point coordinates, and $L_{of}^i$ and $S_{of}^i$ are the offset values for the image point coordinates. Similarly, the normalization values (scale and offset) for the geodetic coordinates of the ground point will be used in the linearized model in Equation (10). These normalization values are provided in the RPC files of the stereo images.
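The forward evaluation of Equations (8) and (9) can be sketched as follows. The 20-monomial ordering follows the common RPC00B convention, which is an assumption here, as are the function names and dictionary keys:

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20 monomials of the standard third-order RPC polynomial
    (ordering per the common RPC00B convention; an assumption here)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
                     L*L*H, P*P*H, H**3])

def rpc_forward(coeffs, lat, lon, h, off, scale):
    """Evaluate Equation (8) on normalized coordinates, then denormalize
    the image coordinates as in Equation (9)."""
    # normalize the ground coordinates
    P = (lat - off['lat']) / scale['lat']
    L = (lon - off['lon']) / scale['lon']
    H = (h - off['h']) / scale['h']
    t = rpc_terms(P, L, H)
    line_n = (coeffs['line_num'] @ t) / (coeffs['line_den'] @ t)
    samp_n = (coeffs['samp_num'] @ t) / (coeffs['samp_den'] @ t)
    # denormalize the image coordinates (Equation (9))
    return (line_n * scale['line'] + off['line'],
            samp_n * scale['samp'] + off['samp'])
```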
The linearized model of the space intersection equations for two corresponding rays can be written in matrix form as:

$$\begin{pmatrix} F_{L_1} - L_1 \\ F_{S_1} - S_1 \\ F_{L_2} - L_2 \\ F_{S_2} - S_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & \partial F_{L_1}/\partial\varphi & \partial F_{L_1}/\partial\lambda & \partial F_{L_1}/\partial h \\ 0 & 0 & \partial F_{S_1}/\partial\varphi & \partial F_{S_1}/\partial\lambda & \partial F_{S_1}/\partial h \\ 1 & 0 & \partial F_{L_2}/\partial\varphi & \partial F_{L_2}/\partial\lambda & \partial F_{L_2}/\partial h \\ 0 & 1 & \partial F_{S_2}/\partial\varphi & \partial F_{S_2}/\partial\lambda & \partial F_{S_2}/\partial h \end{pmatrix} \begin{pmatrix} dL_2 \\ dS_2 \\ d\varphi \\ d\lambda \\ dh \end{pmatrix} \qquad (10)$$

where $(L_1, S_1)$ are the center pixel coordinates of the template window in the first image, and $(L_2, S_2)$ are the center pixel coordinates of the moving search window in the second image. It should be noted that all coordinates in Equation (10) are denormalized. The relationship between the vector of unknown parameters and the observations in the geometric constraint, according to Equation (10), can be established as:

$$l_G + \varepsilon_G = A_G x, \qquad (11)$$

where $l_G$ is the observation vector of the geometric constraint, $\varepsilon_G$ is the residual vector, and $A_G$ is the design matrix of the geometric constraint with the added zero columns.

2.3. Combined System of Constrained Matching Equations

While the LSM uses all the pixels in the template window, the geometric constraint is written for only one point of this template window, usually the center point. Therefore, selecting the appropriate image coordinate system for combining the equations is important. The rational functions provide, for each known object point, the coordinate values of the point in the pixel coordinate system defined for the image. On the other hand, the LSM equations are defined for each matching point in a coordinate system whose origin coincides with the matching point position in the search window. These two coordinate systems differ only in the location of their origins (Figure 2).
The translation parameters of the corresponding point in the search image are the common terms between the LSM and the geometric constraint equations. These parameters are independent of the origin of the coordinate system:

$$dx = dL_2, \quad dy = dS_2. \qquad (12)$$

According to Equations (5) and (10), the common unknown parameters between the LSM equations and the geometric constraint equations are therefore:

$$da_1 = dL_2, \quad db_1 = dS_2. \qquad (13)$$
Each observation set, with its assigned weight matrix, is joined into the equation system. The unknown parameters of the equation system are estimated based on the least-squares method:

$$\hat{x} = \left(A_M^t W_M A_M + A_G^t W_G A_G\right)^{-1} \left(A_M^t W_M l_M + A_G^t W_G l_G\right), \qquad (14)$$

where $W_M$ and $W_G$ are the weight matrices of the LSM and geometric constraint observation equations, respectively. These two sets of equations have different types of observations and, as a result, the definition of the weight matrices is a critical step.
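A minimal sketch of the combined solution of Equation (14), with illustrative names:

```python
import numpy as np

def combined_solution(A_M, l_M, W_M, A_G, l_G, W_G):
    """Solve the combined LSM + geometric-constraint system of Equation (14)."""
    N = A_M.T @ W_M @ A_M + A_G.T @ W_G @ A_G    # combined normal matrix
    rhs = A_M.T @ W_M @ l_M + A_G.T @ W_G @ l_G  # combined right-hand side
    return np.linalg.solve(N, rhs)               # estimated unknowns x_hat
```

Both observation groups contribute to the same normal matrix, so the relative scaling of `W_M` and `W_G` directly controls the influence of the constraint, which is the motivation for the weighting scheme of Section 2.3.1.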

2.3.1. Proposed Weighting Method

Assigning the appropriate weights should lead to achieving the following goals: the point in the first image must be kept fixed during the matching iterations; the corresponding point must have sufficient freedom to satisfy geometric constraints; and the precision of estimated unknown parameters should be improved.

Application of the Redundancy Matrix

One way of determining the effectiveness of the different types of observations in the final results is to compare their redundancy numbers in the redundancy (reliability) matrix. The redundancy matrix relates the observations vector to the estimated residual vector, as expressed by the following equation:

$$\hat{\varepsilon} = l - A\hat{x} = \left(I - A\left(A^T W A\right)^{-1} A^T W\right) l = R\, l, \qquad (15)$$

The geometric interpretation of this matrix in the least-squares approximation is given in Figure 3. The least squares estimation $\hat{x}$ projects the observation vector $l$ onto the subspace $M$ spanned by the columns of the matrix $A$ [26]. In this way, the redundancy matrix $R$ is a projection matrix which projects the observations vector onto the orthogonal complement of the subspace $M$.

The elements $r_{mn}$ of the redundancy matrix indicate the effect of observation $m$ on the estimated residual of observation $n$. All elements of this matrix are between zero and one. The redundancy numbers are the diagonal elements $r_{nn}$ of the redundancy matrix, which can be used as a reliability measure: large values (close to one) make it possible to discover gross errors [27].
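Equation (15) translates directly into code; this sketch returns the full redundancy matrix, whose diagonal holds the redundancy numbers:

```python
import numpy as np

def redundancy_matrix(A, W):
    """Redundancy matrix of Equation (15): R = I - A (A^T W A)^{-1} A^T W.

    The diagonal entries r_nn are the redundancy numbers, and tr(R) equals
    the degree of freedom (number of observations minus number of unknowns).
    """
    n = A.shape[0]
    N_inv = np.linalg.inv(A.T @ W @ A)
    return np.eye(n) - A @ N_inv @ A.T @ W
```

Since R is a projection matrix, it is idempotent (R·R = R), and its trace equals the system's degree of freedom; both properties are useful sanity checks in practice.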

Weight Design Using the Redundancy Matrix

In the proposed approach, the weight of the constraint observations was considered to be relatively small, since the constraints can only be applied to one pixel, while all of the pixels in the image window were involved in the least squares matching equations.
Assuming unit weights for LSM observations, the weights of constraint observations are assigned as follows:
  • Weight of S 2 : By assigning a unit weight, the redundancy number of this observation is about half of the average redundancy number of the LSM observations (see Section 3.4). Thus, we must provide more freedom for this observation in order to direct the LSM window toward a more precise position. Therefore, a small weight should be assigned for this component (relative to the unit weights of the LSM observations).
  • Weight of L 2 : The initial value of this component is captured from the coarse matching step. At this step, the elevation model of the study area as well as the RPCs of the image pair have been utilized, which minimize the along-track error [28]. Thus, the initial value of this component is very close to the desired final value and therefore, its weight should be large enough relative to the weight of S 2 . This condition is met automatically by maintaining the unit weight of this component.
  • Weight of L1 and S1: Owing to the presence of $\varepsilon_G$ in Equation (11), the estimated values of $L_1$, $S_1$ would differ from their initial values, which contradicts one of the main goals of this research, namely that the point in the first image is assumed fixed during the image matching procedure. The common approach to deal with this issue is raising the weight of these observations. However, after updating the weight of the $S_2$ observation, the redundancy numbers obtained for these two observations were reduced to approximately zero. Thus, similar to the $L_2$ component, there was no need for an additional scale factor to fix the position of the point in the first image.
According to the above discussion, and taking inspiration from the work of [21], we decided to divide the observations into two groups. One of the constraint equations formed a single-member group, and the other three constraint equations, along with the LSM equations (1225 equations, assuming a 35 × 35 window), formed the other group. The member of the single-member group is a point coordinate, and most of the members of the other group are grey levels. In other words, the two groups have different types of observations.
We tried to equalize the redundancy numbers of these two groups, which led to higher freedom for S 2 . By analyzing the diagonal values of the redundancy matrix, an additive scaling factor for the single-member group was calculated. In this manner, the weight of each group of observations was assigned properly for achieving better matching results.
Thus, the matrix forms of Equations (6) and (11) are rewritten as follows:

$$\tilde{l}_1 + \tilde{\varepsilon}_1 = B_1 x, \quad \tilde{l}_2 + \tilde{\varepsilon}_2 = B_2 x, \qquad (16)$$

where the indices denote the group number, $\tilde{l}_1$ includes $l_1$ and the first three elements of the $l_2$ vector, $\tilde{l}_2$ is a single-element vector, $B_1$ and $B_2$ are the new design matrices, and $\tilde{\varepsilon}_1$, $\tilde{\varepsilon}_2$ denote the new residual vectors after grouping. The proposed weighting approach was formulated on the basis of two general criteria:
  • The average redundancy number of two groups of observations must be equalized (uniform redundancy).
  • For higher reliability, the average redundancy number of each group should be as close as possible to one (high redundancy).
In order to satisfy both criteria simultaneously, the average redundancy number of each group must be equal to the fixed value of the average redundancy number of all observations:

$$\bar{r}_i = \bar{r}, \qquad (17)$$

where $i = 1, 2$ is the index of the groups, $\bar{r}_i = \mathrm{tr}(R_i)/n_i$, $\bar{r} = \mathrm{tr}(R)/n$, and $n$ denotes the number of observations. In this situation, each group will have the largest possible redundancy numbers and, at the same time, the uniformity of the redundancy numbers will be maintained. Combining these equations yields:

$$\mathrm{tr}(R_i) = n_i\, \bar{r}_i = \nu_i = n_i\, \frac{\nu}{n}, \qquad (18)$$
where $\nu$ is the degree of freedom of the system, equal to the sum of the diagonal elements of the redundancy matrix. We assigned unit weights to the first group, and an optimal technique was then adopted for estimating the weight of the second group. Since the observations were divided into two groups, the equality in Equation (18) depends on the weight of the second group of observations, which is obtained by estimating the scale factor $K$ for this weight matrix. This scale factor is applied only to the second group, which is related to the $S_2$ observation:

$$\mathrm{tr}(R_2) = n_2 - \mathrm{tr}\left(B_2 \left(B^t W_K B\right)^{-1} B_2^t \cdot K \cdot W_2\right) = \frac{n_2}{n}\, \nu, \qquad (19)$$
where $W_1$ and $W_2$ are the weight matrices of the two groups of observations, and $B$, $W$ are the design matrix and weight matrix of the full system of equations. Using the rule $\mathrm{tr}(UV) = \mathrm{tr}(VU)$:

$$\mathrm{tr}\left(K \cdot B_2^t W_2 B_2 \left(B^t W_K B\right)^{-1}\right) - n_2\, \frac{u}{n} = 0, \qquad (20)$$

where $W_K$ is the modified weight matrix of the system of equations:

$$W_K = W + (K - 1) \times P, \qquad (21)$$

$$P = \begin{bmatrix} 0 & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix}_{n \times n}, \qquad (22)$$
Substituting $W_K$ from Equation (21) into Equation (20) and using the normal equation notation gives:

$$\mathrm{tr}\left(K \cdot N_2 \left(N + (K - 1) \times N_2\right)^{-1}\right) - n_2\, \frac{u}{n} = 0. \qquad (23)$$

Following [21], expanding Equation (23) by a Taylor series yields a quadratic equation:

$$a k^2 + b k + c = 0, \qquad (24)$$

where $a = \mathrm{tr}(N_2 N^{-1} N_2 N^{-1})$, $b = -\left[\mathrm{tr}(N_2 N^{-1} N_2 N^{-1}) + \mathrm{tr}(N_2 N^{-1})\right]$, and $c = n_2\, \frac{u}{n}$. Of the two roots of Equation (24), the one with a minus sign in front of the square root of the discriminant is accepted. Solving for $k$ is repeated until it converges to one; in practice, this convergence occurs in fewer than 10 iterations. Multiplying the roots from all iterations gives the desired scaling factor $K$:

$$K = \prod_j k_j, \qquad (25)$$
where $j$ is the iteration index. A scheme of the proposed weighting and unit weighting approaches is presented in Figure 4. In each case, the combined system was first solved assuming unit weights for all observations; then, using the Helmert VCE method [29,30], the variance factors of the two observation sets were calculated. The system was then solved again, with the weights updated by the estimated variance factors.
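The iterative estimation of the scale factor (Equations (24) and (25)) can be sketched as follows, assuming the single-member second group is the last observation of the system; all function and variable names are illustrative:

```python
import numpy as np

def estimate_scale_factor(B, W, n2=1, max_iter=10, tol=1e-8):
    """Iteratively estimate K (Equations (24)-(25)) for a single-member
    second group, assumed to be the LAST observation of the system.
    Returns the product of the per-iteration roots k_j."""
    n, u = B.shape
    W = W.astype(float).copy()
    K = 1.0
    for _ in range(max_iter):
        N_inv = np.linalg.inv(B.T @ W @ B)
        N2 = W[-1, -1] * np.outer(B[-1], B[-1])   # N2 = B2^t W2 B2
        T1 = np.trace(N2 @ N_inv)                 # tr(N2 N^-1)
        T2 = np.trace(N2 @ N_inv @ N2 @ N_inv)    # tr(N2 N^-1 N2 N^-1)
        a, b, c = T2, -(T1 + T2), n2 * u / n      # quadratic of Equation (24)
        k = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)  # minus-sign root
        W[-1, -1] *= k                            # apply k to the group-2 weight
        K *= k
        if abs(k - 1.0) < tol:                    # k converged to one
            break
    return K
```

At the fixed point k = 1 the quadratic reduces to tr(N2 N^-1) = n2·u/n, i.e., the second group attains exactly the average redundancy of the system, which is the equalization goal of Equation (17).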

2.3.2. Precision Estimation

After calculating the least-squares residuals, the a posteriori variance factor is given by:

$$\hat{\sigma}_0^2 = \frac{\hat{\varepsilon}_1^t W_M \hat{\varepsilon}_1 + \hat{\varepsilon}_2^t W_G^K \hat{\varepsilon}_2}{\nu}, \qquad (26)$$

where $W_G^K$ is the modified weight matrix of the geometric constraint observations, as given in Equation (28). At the beginning, due to the lack of preliminary information, unit weights are often assumed for all observations. After estimating the variance components, the scaling factors for each weight matrix are obtained, and the image matching procedure is then repeated based on the new set of weights. As a result, by applying the estimated scale factors $\alpha_1$ and $\alpha_2$, the final form of the covariance matrix of the combined system is as follows:

$$C_l = \begin{bmatrix} \alpha_1 \sigma_{0,1}^2 W_M^{-1} & 0 \\ 0 & \alpha_2 \sigma_{0,2}^2 W_G^{-1} \end{bmatrix}, \qquad (27)$$
After estimating the coefficients $\alpha_1$ and $\alpha_2$, the constrained matching was repeated with the new weights. In the proposed approach, before performing the variance estimation, the scaling factor $K$ was introduced into the weight matrix of the geometric constraint as follows:

$$W_G^K = W_G + (K - 1) \times \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}_{4 \times 4}, \qquad (28)$$
Considering the change in one of the weight matrices, a new estimate of the covariance matrix of the observations is obtained, which should be close to the previous covariance matrix. This covariance matrix can be estimated as follows:

$$C_l^K = \begin{bmatrix} \beta_1 \sigma_{0,1}^2 W_M^{-1} & 0 \\ 0 & \beta_2 \sigma_{0,2}^2 \left(W_G^K\right)^{-1} \end{bmatrix}, \qquad (29)$$

where $\beta_1$ and $\beta_2$ are the scale factors estimated using the Helmert VCE method. The covariance matrix of the unknown parameters then follows as:

$$\hat{C}_{\hat{x}}^K = \hat{C}_{\hat{\delta}}^K = \hat{\sigma}_0^2 \left(A^t \left(C_l^K\right)^{-1} A\right)^{-1}. \qquad (30)$$
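From the estimated covariance matrix, the planimetric error ellipses discussed in Section 3.4 can be derived by an eigen-decomposition of the 2 × 2 horizontal block. This is a sketch; the confidence scale factor shown is our assumption (about 95% for two degrees of freedom), and a factor of 1.0 gives the standard one-sigma ellipse:

```python
import numpy as np

def error_ellipse(cov2x2, confidence_scale=2.447):
    """Semi-axes and orientation of a planimetric error ellipse.

    cov2x2 : 2x2 covariance of the horizontal coordinates.
    confidence_scale : sqrt of the chi-square quantile for 2 DOF
                       (~2.447 for 95%; an assumption here).
    """
    eigval, eigvec = np.linalg.eigh(cov2x2)          # ascending eigenvalues
    semi_minor, semi_major = np.sqrt(eigval) * confidence_scale
    theta = np.arctan2(eigvec[1, 1], eigvec[0, 1])   # major-axis direction
    return semi_major, semi_minor, theta
```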

2.4. Discussion on the Execution Speed

The LSM technique is usually employed to increase the precision of the matching results. However, the nonlinearity of the mathematical model and the use of a neighborhood around each point directly affect the processing time of the image matching step. This mathematical model is linearized around an approximate position of the seed point and is solved iteratively. In each iteration, after the geometric transformation, a new window should be interpolated from the second image. The larger the size of the window, the longer the interpolation process takes.
Here, the coarse matching method performs a 1D search and calculates the correlation coefficient pixel by pixel. Due to the fact that the search is performed in the line direction of the image, the search interval length depends on the elevation range of the study area. For CARTOSAT-1 stereo images, according to the spatial resolution of these images, a one meter increment in the elevation range appears as four pixels in the line direction which should be added to the search interval length.
The presence of the constraint equations does not have a significant effect on the execution speed, because the number of constraint equations is much lower than the number of LSM equations. Additionally, these equations are written based on rational functions, which operate only on the coordinates of the points and do not require complicated processes such as grey level interpolation.
In summary, the desired precision of the image matching, the size of the matching window, the quality of the image texture in the neighborhood of the seed point, the elevation variations of the study area, and the spatial resolution of the stereo images all affect the convergence speed of the utilized image matching algorithm.

3. Experimental Results

In order to test the proposed method, 16 points on a regular grid in the first image were considered. The image matching step, through the unit weighting and proposed weighting methods, was implemented. Relying on the redundancy matrix, we controlled the amount of displacement of the points on both images during image matching iterations. Estimated residual vectors of the observations demonstrated the success rate of the proposed method. Additionally, the performance of the method was assessed through analysis of the error ellipses in the object space.

3.1. Study Area and Data

The experimental data used in this study is a part of the ISPRS benchmark dataset which consists of a pair of CARTOSAT-1 sub-images from the LaMola region of Catalonia, Spain, captured on 5 February 2008, which was supplied in Orthokit format (Figure 5) [31]. Additionally, the improved RPCs of the images are also provided.
We have chosen this dataset because the image matching strategy which was adopted requires precise RPCs of the stereo image. Additionally, the one-arc-second SRTM file of the study area was available from the USGS website [32]. Some specifications of the dataset and test area are presented in Table 1.

3.2. Coarse Matching Step

The RPC model can be used for finding the approximate corresponding point. By transferring the point on the first image to multiple levels of elevation in the object space, and then transferring all points of intersection to the second image, the search space for matching is limited to a straight line.
Using the RPCs of the image pair, each point was first transferred to the SRTM elevation model. The point of intersection was then projected onto the second image; due to the low resolution of the SRTM elevation model, this projected point only serves as the center of a 1D search space. The search was performed along the line direction (along-track direction) of the second image to find the initial corresponding point. At each position of the search space, a normalized cross-correlation (NCC) value was calculated, and the position with the maximal NCC coefficient was introduced as the seed point to the precise matching step.
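The coarse matching step described above can be sketched as a 1D NCC search along the line direction; all function and parameter names here are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally-sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def coarse_match_1d(template, search_img, center_line, sample, half_len, hw):
    """Slide along the line direction and keep the position of maximal NCC.

    center_line : line predicted from the RPCs and the SRTM elevation
    half_len    : half-length of the 1D search interval (elevation-dependent)
    hw          : half-size of the correlation window
    """
    best_line, best_ncc = None, -1.0
    for line in range(center_line - half_len, center_line + half_len + 1):
        win = search_img[line - hw:line + hw + 1, sample - hw:sample + hw + 1]
        c = ncc(template, win)
        if c > best_ncc:
            best_line, best_ncc = line, c
    return best_line, best_ncc
```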

3.3. Precise Matching Step

In order to extract highly-accurate corresponding points between the stereo images, strict thresholds must be applied, especially to the bidirectional matching shift ($da_1$ and $db_1$). Although a value of 0.1 pixels is sufficient for CARTOSAT-1 stereo images [33], in this experiment a value of 0.05 pixels was selected for the shift threshold. Moreover, in order to avoid false matches, the number of iterations was limited to 20, and the normalized correlation coefficient of the two windows in the last iteration was required to be greater than 0.8.

Optimum Weight Estimation

The iterative solution of the $k_j$ coefficients for a sample point is given in Table 2. The product of the values from all iterations gives the final value of this scaling factor, $K = 0.0113$. Considering that an almost identical value was obtained at the other adopted points all over the test area, this value was used to update the weight matrix of the second group of observations.
For comparison with other methods, the scaling factors at all points were computed using Zhang's method [15], which was originally proposed by the author of [34]:
$$K = \frac{1}{m \times n} \sum \left( g_x^2 + g_y^2 \right)$$
where K is equal to the average of the squared grey-level derivatives in the m -by- n image window. The computed scale factors are given in Table 3. As can be seen, the scaling factors for the weight matrix do not differ much from the results of the proposed approach. The differences are due to the impact of the radiometric content of the image windows, which is not considered in our approach. It should be noted that in this method both points are displaced and, hence, the scaling factors were applied to all observations in the geometric constraint equations.
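A sketch of Zhang's scale factor for a single window is shown below. It approximates the grey-level derivatives with forward differences, so the average runs over the (m − 1) × (n − 1) interior rather than exactly m × n (our simplification):

```python
def zhang_scale_factor(window):
    """Average of the squared grey-level derivatives over an image window,
    using forward differences for g_x and g_y."""
    m, n = len(window), len(window[0])
    total, count = 0.0, 0
    for i in range(m - 1):
        for j in range(n - 1):
            gx = window[i][j + 1] - window[i][j]   # derivative across columns
            gy = window[i + 1][j] - window[i][j]   # derivative across rows
            total += gx * gx + gy * gy
            count += 1
    return total / count
```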

3.4. Analysis of the Results

In order to evaluate the effect of the proposed approach on the adjustment results, three of the 16 points were selected. Statistical information on the redundancy numbers of all the observations collected in both cases is presented in Table 4 and Table 5.
The matching window, with a size of 35 × 35 pixels, introduces 1225 grey-level observations (one observation per pixel, as described in Equation (5)) to the system of equations. By adding the four observations from the geometric constraint, the number of observations reaches 1229, whereas the number of unknown parameters is 11 (Equation (7)). Therefore, the degree of freedom and the average redundancy number of the combined system will be v = 1229 − 11 = 1218 and r ¯ = 1218 / 1229 = 0.9910 , respectively.
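This bookkeeping is trivial to verify:

```python
n_grey = 35 * 35            # grey-level observations, one per pixel (Equation (5))
n_gc = 4                    # geometric-constraint observations
n_obs = n_grey + n_gc       # 1229 observations in total
n_unknowns = 11             # unknown parameters (Equation (7))

dof = n_obs - n_unknowns    # degrees of freedom, v = 1218
r_mean = dof / n_obs        # average redundancy number, ~0.9910
```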
As is clear from Table 4 and Table 5, in the proposed approach, relative to the unit weighting approach, the redundancy number of observation S 2 is much closer to the average redundancy of the system (marked in bold).
In both cases, after completing the least squares adjustment, the covariance matrix of the unknown parameters was calculated. The standard deviations of the coordinates of the intersection point were estimated from the covariance matrix elements. These standard deviations represent error estimates along the reference axes. The estimated error can then be shown through the orientation and lengths of the semi-axes of the standard (68% confidence level) error ellipse [35]. Therefore, for comparison, the planimetric error ellipses depicted around the estimated intersection points resulting from the two cases are shown in Figure 6. For greater confidence, all of the error ellipses were plotted at the 95% confidence level.
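As an illustration, the semi-axes and orientation of the planimetric error ellipse can be derived from the 2 × 2 covariance block of the intersection point by an eigen-decomposition. The sketch below scales the standard ellipse to the 95% confidence level with √5.991 (chi-square with 2 degrees of freedom); the function name and interface are our own:

```python
import math

def error_ellipse(sxx, syy, sxy, k=math.sqrt(5.991)):
    """Semi-major axis a, semi-minor axis b, and orientation (degrees) of
    the error ellipse from a 2x2 covariance block [[sxx, sxy], [sxy, syy]],
    scaled to the 95% confidence level."""
    mean = 0.5 * (sxx + syy)
    radius = math.hypot(0.5 * (sxx - syy), sxy)
    lam_max, lam_min = mean + radius, mean - radius      # eigenvalues
    a, b = k * math.sqrt(lam_max), k * math.sqrt(lam_min)
    theta = 0.5 * math.degrees(math.atan2(2.0 * sxy, sxx - syy))
    return a, b, theta
```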
For a better understanding of the dimensions of the error ellipses, which were obtained in arc-degrees, they were converted to lengths in meters on the reference ellipsoid as follows [36]:
$$dX = N \cos(\varphi)\, d\lambda, \qquad dY = M\, d\varphi$$
where $M = \frac{a(1 - e^2)}{(1 - e^2 \sin^2 \varphi)^{3/2}}$ and $N = \frac{a}{(1 - e^2 \sin^2 \varphi)^{1/2}}$; $a$ and $e$ are the semi-major axis and eccentricity of the WGS84 ellipsoid, respectively. Additionally, $dX$ and $dY$ denote the dimensions of the error ellipses in meters, while $d\lambda$ and $d\varphi$ are estimated in degrees. Therefore, this effect should be taken into account in the estimated covariance matrix of the unknowns, with the aim of acquiring the error ellipse dimensions in meters.
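A sketch of this conversion for the WGS84 ellipsoid follows (the constants are the standard WGS84 defining parameters; the function name is ours):

```python
import math

A = 6378137.0               # WGS84 semi-major axis (m)
F = 1.0 / 298.257223563     # WGS84 flattening
E2 = F * (2.0 - F)          # first eccentricity squared

def deg_to_meters(dphi_deg, dlam_deg, lat_deg):
    """Convert small latitude/longitude increments (degrees) at latitude
    lat_deg into metric lengths: dY = M*dphi, dX = N*cos(phi)*dlam."""
    phi = math.radians(lat_deg)
    w2 = 1.0 - E2 * math.sin(phi) ** 2
    M = A * (1.0 - E2) / w2 ** 1.5      # meridian radius of curvature
    N = A / math.sqrt(w2)               # prime-vertical radius of curvature
    dY = M * math.radians(dphi_deg)
    dX = N * math.cos(phi) * math.radians(dlam_deg)
    return dX, dY
```

At the equator, one arc-second corresponds to roughly 31 m in both directions; toward the poles the east-west length shrinks with cos φ.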
As can be seen in Figure 6, the size of the error ellipses is not uniform, even when only the unit weighting approach is considered. To clarify the main reason for this discrepancy, we should consider the texture of the template window (Figure 7). Area-based matching methods have the potential to achieve highly-precise results if the template window is well-textured. The uniqueness of each template window in the search area can be evaluated using an autocorrelation function [37]. The existence of a sharp autocorrelation peak at the center of the window is essential for the good performance of local (intensity-based) image matching techniques.
Error ellipsoids are the general way to illustrate the confidence region of estimated parameters in 3D. However, at our computed points, one of the axes of the error ellipsoid was almost vertical, implying that the correlation between the vertical and horizontal components is small and insignificant. For this reason, we treated them as independent components. Hence, the confidence regions of the vertical and horizontal components are presented individually: the error ellipses for the horizontal components are shown in Figure 6, and the intervals for the vertical components are shown in Figure 8.
At each of the 16 points, the intersection position of the corresponding rays (as the center of the error ellipse) changed after applying the proposed weighting approach relative to the unit weighting case. However, in order to make a fair comparison, the error ellipses resulting from the two weighting approaches are illustrated as concentric ellipses in Figure 9.
The lengths of the error ellipse axes decreased significantly at all 16 points; Table 6 shows their numerical values. As can be seen, with the proposed method the shapes of the error ellipses are almost circular (smaller eccentricity), which implies that the error variances are distributed isotropically in the horizontal plane.

4. Discussion

An analysis of the results and a comparison with the unit weighting method show the success of the proposed approach in achieving the predetermined goals. In Figure 10, the computed residuals for the observations of the image points are illustrated. As can be seen in Figure 10, using the proposed method, the residuals of the first image observations are close to zero. As expected, assigning small redundancy numbers to the observations of the given point in the first image makes it possible to effectively fix the position of this point during the image matching iterations. Additionally, by increasing the redundancy number of the sample component in the second image, the residual of this observation increased at all points. In other words, the point in the second image was moved toward the epipolar plane with more freedom.
It is remarkable that two of the four intersection observations, which are related to the coordinates of the corresponding point in the search image, take new values in each iteration. This is because of the influence of some of the unknown LSM parameters on these two observations. After the differential values of the shift parameters of the affine transformation model are computed, they are used to update the coordinates of the corresponding point. This is why applying a scale factor to the weight of the sample component of a corresponding point changes the image matching result and improves the intersection precision. Clearly, if these observations were given constant values, adding a scale factor would have no effect on the precision estimates of the unknowns.

5. Conclusions

A more reliable image matching strategy leads to a more accurate scene reconstruction, which is usually achieved by introducing geometric constraints. Before adding any geometric equations to the radiometric equations of the LSM, the optimal weights of the different types of observations should be determined. Following a previous study that used the redundancy matrix to determine the optimal weights in the second-order design of geodetic networks, in this study the weights were determined from an analysis of the redundancy matrix. The conventional application of the redundancy matrix is the internal reliability test performed after the least squares adjustment.
Assuming unit weights for all observations and estimating the resulting redundancy matrix, we decided to update the weight of just one observation ( S 2 ). In this way, the redundancy number of this observation increased about two times and was equalized with the average redundancy number of the other elements of the observation vector. A higher redundancy number, on the one hand, increases the internal reliability and, on the other hand, improves the flexibility of the geometric constraint to improve the image matching performance. It should be noted that in the image matching procedure, the coordinates of the point in the second image were treated both as observations of the geometric constraint and as unknown parameters of the LSM.
The proposed weighting approach introduces some changes in the estimated values of the unknown parameters and also in the estimated covariance matrix. As a result, the sizes of the planimetric error ellipses of the intersection points in the object space decreased significantly: the geometric mean of the two axes of these ellipses under the proposed weighting was reduced to 0.17 times that of the unit weighting approach.
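The 0.17 factor can be reproduced directly from the semi-axes listed in Table 6, by averaging the per-point ratio of the geometric means √(ab) (one plausible reading of the reported statistic):

```python
import math

# (a, b) semi-axes in meters from Table 6
unit_w = [(0.37, 0.29), (0.45, 0.35), (2.12, 1.63), (0.35, 0.28),
          (2.64, 3.43), (0.24, 0.20), (4.22, 3.24), (0.19, 0.16),
          (1.08, 0.84), (0.12, 0.11), (0.98, 0.77), (4.60, 3.53),
          (1.07, 0.83), (0.67, 0.52), (1.39, 1.07), (0.87, 0.68)]
prop_w = [(0.06, 0.05), (0.07, 0.06), (0.29, 0.29), (0.06, 0.05),
          (0.50, 0.52), (0.05, 0.03), (0.60, 0.61), (0.04, 0.03),
          (0.16, 0.16), (0.04, 0.02), (0.15, 0.14), (0.65, 0.67),
          (0.16, 0.15), (0.10, 0.10), (0.20, 0.20), (0.13, 0.13)]

# ratio of geometric means sqrt(a*b) per point, then the overall average
ratios = [math.sqrt((ap * bp) / (au * bu))
          for (au, bu), (ap, bp) in zip(unit_w, prop_w)]
mean_ratio = sum(ratios) / len(ratios)   # ~0.17
```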
A comparison with a similar approach was performed, and the two approaches produced the same results at half of the points. However, different scale factors were obtained at the other points because the proposed method neglects the grey levels of the pixels within the matching window. In future research, the proposed method, which is based only on statistical analysis, could be combined with the radiometric content of the matching windows.

Acknowledgments

The authors would like to thank Dr. Ali Azizi for his scientific support and helpful suggestions. Special thanks are given to the data providers, namely: Euromap for the Cartosat-1 dataset and ICC Catalonia for the LiDAR data.

Author Contributions

M.A.S. proposed the weighting approach; Hamed Afsharnia designed and implemented the image matching strategy and prepared the manuscript; Hossein Arefi supervised the whole research process. All authors have approved the results and revised the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F.; Gonizzi-Barsanti, S. Dense image matching: Comparisons and analyses. In Proceedings of the Digital Heritage International Congress, Marseille, France, 28 October–1 November 2013; pp. 47–54. [Google Scholar]
  2. Szeliski, R. Computer Vision: Algorithms and Applications, 1st ed.; Springer: London, UK, 2011; ISBN 978-1-84882-935-0. [Google Scholar]
  3. Yan, L.; Fei, L.; Chen, C.; Ye, Z.; Zhu, R. A multi-view dense image matching method for high-resolution aerial imagery based on a graph network. Remote Sens. 2016, 8, 799. [Google Scholar] [CrossRef]
  4. Hirschmuller, H. Accurate and efficient stereo processing by semi-global matching and mutual information. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 807–814. [Google Scholar]
  5. Seki, A.; Pollefeys, M. Sgm-nets: Semi-global matching with neural networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  6. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar]
  7. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (surf). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  8. Long, T.; Jiao, W.; He, G.; Zhang, Z. A fast and reliable matching method for automated georeferencing of remotely-sensed imagery. Remote Sens. 2016, 8, 56. [Google Scholar] [CrossRef]
  9. Zavorin, I.; Le Moigne, J. Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery. IEEE Trans. Image Process. 2005, 14, 770–782. [Google Scholar] [CrossRef] [PubMed]
  10. Murphy, J.M.; Le Moigne, J.; Harding, D.J. Automatic image registration of multimodal remotely sensed data with global shearlet features. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1685–1704. [Google Scholar] [CrossRef]
  11. Silveira, M.; Feitosa, R.; Jacobsen, K.; Brito, J.; Heckel, Y. A hybrid method for stereo image matching. In Proceedings of the XXI Congress: The International Society for Photogrammetry and Remote Sensing, Beijing, China, 3–11 July 2008; pp. 895–900. [Google Scholar]
  12. Gruen, A. Development and status of image matching in photogrammetry. Photogramm. Rec. 2012, 27, 36–57. [Google Scholar] [CrossRef]
  13. Kim, T. A study on the epipolarity of linear pushbroom images. Photogramm. Eng. Remote Sens. 2000, 66, 961–966. [Google Scholar]
  14. Gruen, A. Adaptive least squares correlation: A powerful image matching technique. South Afr. J. Photogramm. Remote Sens. Cartogr. 1985, 14, 175–187. [Google Scholar]
  15. Zhang, L. Automatic Digital Surface Model (DSM) Generation from Linear Array Images. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, ETH Zurich, Zurich, Switzerland, 2005. [Google Scholar]
  16. Sohn, H.G.; Park, C.H.; Chang, H. Rational function model-based image matching for digital elevation models. Photogramm. Rec. 2005, 20, 366–383. [Google Scholar] [CrossRef]
  17. Duan, Y.; Huang, X.; Xiong, J.; Zhang, Y.; Wang, B. A combined image matching method for chinese optical satellite imagery. Int. J. Digit. Earth 2016, 9, 851–872. [Google Scholar] [CrossRef]
  18. Ling, X.; Zhang, Y.; Xiong, J.; Huang, X.; Chen, Z. An image matching algorithm integrating global srtm and image segmentation for multi-source satellite imagery. Remote Sens. 2016, 8, 672. [Google Scholar] [CrossRef]
  19. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef]
  20. Chai, J.; Ma, S. An evolutionary framework for stereo correspondence. In Proceedings of the Fourteenth International Conference on Pattern Recognition, Brisbane, Australia, 16–20 August 1998; pp. 841–844. [Google Scholar]
  21. Amiri-Simkooei, A.; Sharifi, M.A. Approach for equivalent accuracy design of different types of observations. J. Surv. Eng. 2004, 130, 1–5. [Google Scholar] [CrossRef]
  22. Seeber, G. Satellite Geodesy: Foundations, Methods, and Applications; Walter de Gruyter: Berlin, Germany, 2003. [Google Scholar]
  23. Afsharnia, H.; Azizi, A.; Arefi, H. Accuracy improvement by the least squares image matching evaluated on the cartosat-1. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 11. [Google Scholar] [CrossRef]
  24. Tao, C.V.; Hu, Y. A comprehensive study of the rational function model for photogrammetric processing. Photogramm. Eng. Remote Sens. 2001, 67, 1347–1358. [Google Scholar]
  25. Fraser, C.S.; Hanley, H.B. Bias compensation in rational functions for ikonos satellite imagery. Photogramm. Eng. Remote Sens. 2003, 69, 53–57. [Google Scholar] [CrossRef]
  26. Sneeuw, N.; Krumm, F.; Roth, M. Adjustment Theory; University Stuttgart: Stuttgart, Germany, 2008. [Google Scholar]
  27. Cothren, J.D. Reliability in Constrained Gauss-Markov Models: An Analytical and Differential Approach with Applications in Photogrammetry. Ph.D. Thesis, The Ohio State University, Columbus, OH, USA, 2005. [Google Scholar]
  28. Lutes, J. Photogrammetric Processing of Cartosat-1 Stereo Imagery; Available online: www.eotec.com/images/LutesCartosat_JACIE2006.pdf (accessed on 13 June 2017).
  29. Helmert, F.R. Die Ausgleichungsrechnung Nach der Methode der Kleinsten Quadrate: Mit Anwendungen auf die Geodȧsie, die Physik und die Theorie der Messinstrumente, 2nd ed.; BG Teubner: Leipzig, Germany, 1907. [Google Scholar]
  30. Gao, Z.; Shen, W.; Zhang, H.; Ge, M.; Niu, X. Application of helmert variance component based adaptive kalman filter in multi-gnss ppp/ins tightly coupled integration. Remote Sens. 2016, 8, 553. Available online: https://0-www-mdpi-com.brum.beds.ac.uk/2072-4292/8/7/553 (accessed on 9 May 2017). [CrossRef]
  31. Reinartz, P.; d’Angelo, P.; Krauß, T.; Poli, D.; Jacobsen, K.; Buyuksalih, G. Benchmarking and quality analysis of dem generated from high and very high resolution optical stereo satellite data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38. [Google Scholar] [CrossRef]
  32. NASA JPL. NASA Shuttle Radar Topography Mission Global 1 Arc Second. Available online: https://lpdaac.usgs.gov/dataset_discovery/measures/measures_products_table/srtmgl1_v003 (accessed on 28 May 2016).
  33. D’Angelo, P.; Lehner, M.; Krauss, T.; Hoja, D.; Reinartz, P. Towards automated dem generation from high resolution stereo satellite images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1137–1342. [Google Scholar]
  34. Li, H. Semi-Automatic Road Extraction from Satellite and Aerial Images. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, ETH Zurich, Zurich, Switzerland, 1997. [Google Scholar]
  35. Ghilani, C.D. Adjustment Computations: Spatial Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  36. Thomas, C.M.; Featherstone, W.E. Validation of vincenty’s formulas for the geodesic using a new fourth-order extension of kivioja’s formula. J. Surv. Eng. 2005, 131, 20–26. [Google Scholar] [CrossRef]
  37. Potucková, M. Image Matching and Its Applications in Photogrammetry. Ph.D. Thesis, Aalborg University, Aalborg, Denmark, 2004. [Google Scholar]
Figure 1. The framework of the image matching strategy.
Figure 2. Relationship between two parallel coordinate systems in the combined system. The L S pixel coordinate system is a 2D Cartesian system and its origin coincides with the top-left corner of the original (satellite) image. The L - and S -axis are taken along the image rows and image columns, respectively. The x y pixel coordinate system is a 2D Cartesian system and its origin coincides with the center of the image window. Each image window has its own coordinate system which is used in forming the LSM equations.
Figure 3. Geometric interpretation of the redundancy matrix. The magnitude of residual vector ( ε ^ ) which comes from the least squares solution, depends on the redundancy matrix ( R ) that operates on the observation vector ( l ) .
Figure 4. Flowchart of the weight design approaches. The middle part is common between the two approaches (GC: geometric constraint).
Figure 5. CARTOSAT-1 stereo pair of the LaMola region: (a) the forward image (along-track look angle: +26°), which is used as the second image; (b) the aft image (along-track look angle: −5°), which is used as the first image; and (c) the location of the study area (red hatched rectangle) in Catalonia, Spain.
Figure 6. Comparison of the shape of the error ellipses for 16 intersection points resulting from the two approaches. The filled ellipses have been acquired by the proposed approach. The confidence level is 95%. See Table 6 for detailed properties of the error ellipses.
Figure 7. Application of the autocorrelation function for evaluating the suitability of a template window for area-based image matching. The autocorrelation surfaces are presented at four of the 16 points: the two points with the smallest error ellipses (points 8 and 10) and the two points with the largest error ellipses (points 7 and 12). The resulting autocorrelation surfaces (3D view and contour plot) show (a) a single sharp peak at point 8; (b) a single sharp peak at point 10; (c) a single weak peak at point 7; and (d) a smooth single peak at point 12, which cannot guarantee a unique corresponding point in the second image. The peaks at three of the points (8, 10, and 12) are indicated by the dashed red marks.
Figure 8. Comparison of the intervals of the vertical error component for 16 intersection points obtained from the two approaches: (a) unit weighting, and (b) proposed weighting. The confidence level is 95%.
Figure 9. The difference in the intersection positions of corresponding rays in the ground space resulting from the unit weighting and proposed weighting approaches. The start and the end of the arrows at each point are the intersection position of rays resulting from the unit weighting and proposed weighting approaches, respectively. The point coordinates are in the WGS84-UTM projection system.
Figure 10. The redundancy numbers and their impact on the residuals of the corresponding point observations resulting from two weighting approaches. The redundancy numbers (left side) and the obtained residuals (right side) from (a) the line component in the first image; (b) the sample component in the first image; (c) the line component in the second image; and (d) the sample component in the second image.
Table 1. Test region and dataset specifications.
| Test Region | Properties |
| --- | --- |
| Region Name | LaMola |
| Approximate Area (km²) | 16 |
| Lower Left Position (WGS84-UTM 31 N) | 416400 E, 4608600 N |
| Region Elevation Range 1 (m) | 331–1115 |
| Region Type | Steep Mountainous-Forests |
| Image Size (Pixels) | 2488 × 2784 |
1 Elevation range is derived from LiDAR data provided with the benchmark data.
Table 2. Improving weights of the constraint observation (second group).
| Iteration Number ( j ) | k j |
| --- | --- |
| 1 | 0.0142 |
| 2 | 0.7973 |
| 3 | 0.9999 |
| 4 | 1 |
Table 3. The estimated scaling factors, K , for the geometric constraint using Zhang’s method on the 16 points.
| Point Number | K |
| --- | --- |
| 1 | 0.0171 |
| 2 | 0.0089 |
| 3 | 0.0210 |
| 4 | 0.0846 |
| 5 | 0.0151 |
| 6 | 0.0262 |
| 7 | 0.0200 |
| 8 | 0.0142 |
| 9 | 0.0183 |
| 10 | 0.0187 |
| 11 | 0.0334 |
| 12 | 0.0175 |
| 13 | 0.0233 |
| 14 | 0.0229 |
| 15 | 0.0350 |
| 16 | 0.0160 |
Table 4. Statistical information on the redundancy numbers for different types of observations—unit weighting case. For instance, the results for three points are presented.
| Number | r_min (LSM) | r_max (LSM) | r_mean (LSM) | r_L1 (GC) | r_S1 (GC) | r_L2 (GC) | r_S2 (GC) | r_min (all) | r_max (all) | r_mean (all) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.9412 | 0.9991 | 0.9935 | 1.70 × 10⁻⁶ | 0.4424 | 1.78 × 10⁻⁶ | **0.5573** | 1.70 × 10⁻⁶ | 0.9991 | 0.9910 |
| 2 | 0.9580 | 0.9990 | 0.9935 | 1.62 × 10⁻⁶ | 0.4183 | 1.70 × 10⁻⁶ | **0.5269** | 1.62 × 10⁻⁶ | 0.9990 | 0.9910 |
| 3 | 0.9577 | 0.9985 | 0.9935 | 1.67 × 10⁻⁶ | 0.4381 | 1.75 × 10⁻⁶ | **0.5519** | 1.67 × 10⁻⁶ | 0.9985 | 0.9910 |
Table 5. Statistical information on the redundancy numbers for different types of the observations—proposed weighting case. For instance, the results for the three points are presented.
| Number | r_min (LSM) | r_max (LSM) | r_mean (LSM) | r_L1 (GC) | r_S1 (GC) | r_L2 (GC) | r_S2 (GC) | r_min (all) | r_max (all) | r_mean (all) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.9412 | 0.9991 | 0.9935 | 3.39 × 10⁻⁸ | 0.0088 | 3.55 × 10⁻⁸ | **0.9912** | 3.39 × 10⁻⁸ | 0.9991 | 0.9910 |
| 2 | 0.9576 | 0.9990 | 0.9934 | 3.45 × 10⁻⁸ | 0.0089 | 3.63 × 10⁻⁸ | **0.9900** | 3.45 × 10⁻⁸ | 0.9990 | 0.9910 |
| 3 | 0.9575 | 0.9985 | 0.9935 | 3.41 × 10⁻⁸ | 0.0089 | 3.58 × 10⁻⁸ | **0.9908** | 3.41 × 10⁻⁸ | 0.9985 | 0.9910 |
Table 6. Properties of the error ellipses resulting from the two weighting approaches.
| Point Number | Unit: a (meters) | Unit: b (meters) | Unit: Azimuth (degrees) | Unit: e 1 | Proposed: a (meters) | Proposed: b (meters) | Proposed: Azimuth (degrees) | Proposed: e |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.37 | 0.29 | 16.13 | 0.39 | 0.06 | 0.05 | 9.48 | 0.23 |
| 2 | 0.45 | 0.35 | 15.75 | 0.38 | 0.07 | 0.06 | 10.35 | 0.24 |
| 3 | 2.12 | 1.63 | 14.76 | 0.41 | 0.29 | 0.29 | −6.34 | 0.03 |
| 4 | 0.35 | 0.28 | 13.36 | 0.37 | 0.06 | 0.05 | 10.77 | 0.15 |
| 5 | 2.64 | 3.43 | −74.30 | 0.70 | 0.50 | 0.52 | −7.18 | 0.05 |
| 6 | 0.24 | 0.20 | 17.82 | 0.34 | 0.05 | 0.03 | 9.17 | 0.57 |
| 7 | 4.22 | 3.24 | 14.76 | 0.41 | 0.60 | 0.61 | −0.28 | 0.05 |
| 8 | 0.19 | 0.16 | 13.16 | 0.29 | 0.04 | 0.03 | 10.95 | 0.58 |
| 9 | 1.08 | 0.84 | 15.92 | 0.40 | 0.16 | 0.16 | 31.19 | 0.03 |
| 10 | 0.12 | 0.11 | 15.77 | 0.25 | 0.04 | 0.02 | 8.30 | 0.77 |
| 11 | 0.98 | 0.77 | 15.50 | 0.39 | 0.15 | 0.14 | 15.89 | 0.10 |
| 12 | 4.60 | 3.53 | 14.27 | 0.41 | 0.65 | 0.67 | 3.30 | 0.04 |
| 13 | 1.07 | 0.83 | 15.87 | 0.40 | 0.16 | 0.15 | 29.45 | 0.03 |
| 14 | 0.67 | 0.52 | 15.09 | 0.40 | 0.10 | 0.10 | 27.44 | 0.03 |
| 15 | 1.39 | 1.07 | 14.74 | 0.41 | 0.20 | 0.20 | −4.57 | 0.03 |
| 16 | 0.87 | 0.68 | 14.29 | 0.40 | 0.13 | 0.13 | 28.05 | 0.020 |
1 Eccentricity.

Afsharnia, H.; Arefi, H.; Sharifi, M.A. Optimal Weight Design Approach for the Geometrically-Constrained Matching of Satellite Stereo Images. Remote Sens. 2017, 9, 965. https://0-doi-org.brum.beds.ac.uk/10.3390/rs9090965