Article

Design and Analysis of a Non-Iterative Estimator for Target Location in Multistatic Sonar Systems with Sensor Position Uncertainties

1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Department of Electronic Engineering, Jiangnan University, Wuxi 214122, China
2 Department of Electrical and Computer Engineering, College of Engineering, University of Canterbury, Christchurch 8020, New Zealand
3 Locaris Technology Co., Ltd., Zhengzhou 450000, China
* Author to whom correspondence should be addressed.
Submission received: 3 December 2019 / Revised: 6 January 2020 / Accepted: 10 January 2020 / Published: 15 January 2020
(This article belongs to the Section Engineering Mathematics)

Abstract
Target location is a basic application of a multistatic sonar system. Determining the position/velocity vector of a target from the related sonar observations is a nonlinear estimation problem. The presence of possible sensor position uncertainties turns this problem into a more challenging hybrid parameter estimation problem. Conventional gradient-based iterative estimators suffer from initialization difficulties and local convergence. Even when initialization and convergence are unproblematic, they usually incur a large computational cost. In view of these drawbacks, we develop a computationally efficient non-iterative position/velocity estimator. The main numerical computation involved is weighted least squares optimization, which makes the estimator computationally efficient. Parameter transformation, model linearization and two-stage processing are exploited to free the estimator from iterative computation. Through performance analysis and experimental verification, we find that the proposed estimator reaches the hybrid Cramér–Rao bound and has linear computational complexity.

1. Introduction

In recent years, there has been a lively interest in target location using multistatic sonars [1,2,3,4,5,6,7,8,9,10,11,12,13]. In a multistatic sonar system, the sum of each pair of transmitter–target range and target–receiver range defines an ellipse. Then the target is at the intersection of all these ellipses [7]. The elliptical location encountered in the multistatic sonar systems has also been considered in the MIMO radar [14,15,16,17,18,19,20], multistatic radar [21,22,23,24,25] and indoor positioning systems [26,27].
A considerable amount of literature has been published on the problem of estimating the coordinates of the intersection of the ellipses, which can be statistically modelled as a nonlinear estimation problem. To resolve the essential nonlinearity in the problem, linearization is a natural idea. In particular, the measurement equations were linearized by Taylor expansion, resulting in an iterative algorithm [20]. As an alternative to Taylor expansion, introducing nuisance parameters is another route to linearization. For example, the classic spherical-interpolation [28] and spherical-intersection [29] methods were ported to the elliptical location problem [25]. However, the estimation accuracy achieved in [25] is not optimal. Slightly more complex than the linear models, a quadratically constrained least squares model was constructed, which is generally difficult to solve effectively [27,30]. More recently, as another major methodology for parameter estimation, a Bayes estimator was presented for elliptical location, involving formidable numerical integration [4]. Intuitively, integrating other kinds of observations helps improve the positioning accuracy. For instance, Doppler shift measurements were incorporated to improve the position estimate and additionally identify the velocity [5].
In addition to the difficulties raised by the high nonlinearity in the statistical models, another obstacle in the multistatic sonar location is that the complex ocean environments introduce uncertainties in the positions of the transmitters and receivers. Preliminary work considering sensor location errors in elliptical location was reported in the literature [6,10,13]. Recent advances have seen an efficient non-iterative estimator for the multistatic sonar location [5,6] inspired by the renowned work of [31].
Perturbation analysis of least squares problems is a major topic in numerical linear algebra. Related work has focused on establishing various error bounds [32,33,34]. We combine the basic techniques of perturbation analysis with multivariate statistics [35] to quantitatively evaluate the estimators for a nonlinear estimation problem.
On the basis of the above work, our technical contributions are summarized here.
  • We establish a statistical model of determining both the position and velocity of a moving target in a multistatic sonar system using differential delays and Doppler shifts. The uncertainties in the sensor positions are carefully taken into account in our model. The performance limit is developed for this problem.
  • To tackle the proposed nonlinear hybrid parameter estimation problem, we design an efficient non-iterative solution using parameter transformation, model linearization and two-stage processing.
  • We further analyze the bias vector and covariance matrix of our estimator theoretically using the second/first-order perturbation analysis and multivariate statistics.
  • We prove that the proposed estimator has approximate statistical efficiency and linear complexity.
The rest of this paper is organized as follows. Section 2 lists the notational conventions that will be used throughout the paper. Section 3 provides the location scenario and formulates the problem as a nonlinear estimation problem. In Section 4, we evaluate the performance limit for the proposed problem. Section 5 is devoted to developing our estimator. Then, Section 6 analyzes the bias vector and covariance matrix of our estimator up to the second/first-order random errors. Section 7 contains comprehensive Monte Carlo simulation results, and finally Section 8 draws the conclusion.

2. Notational Conventions

We will use bold lowercase letters to denote column vectors and bold uppercase ones to denote matrices. Specifically, $\mathbf{0}_{p\times q}$ is a $p \times q$ zeros matrix, $\mathbf{1}_{p\times q}$ is a $p \times q$ ones matrix, and $\mathbf{I}$ is an identity matrix of appropriate size. The operators $\otimes$ and $\circ$ represent the Kronecker product and Hadamard product respectively. The expression $\mathbf{M}_1 \succeq \mathbf{M}_2$ means that $(\mathbf{M}_1 - \mathbf{M}_2)$ is a positive semidefinite matrix. $\mathrm{diag}(\mathbf{v})$ is the square diagonal matrix with the elements of vector $\mathbf{v}$ on the main diagonal. $\mathbf{B} = \mathrm{blkdiag}(\mathbf{M}_1, \ldots, \mathbf{M}_N)$ is the block diagonal matrix created by aligning the matrices $\mathbf{M}_1, \ldots, \mathbf{M}_N$ along the diagonal of $\mathbf{B}$. When we want to access selected elements of a vector/matrix, we imitate the syntax of the MATLAB programming language. For simplicity of presentation, we use numerous symbols and notations. They are summarized in Table 1 for quick reference. For the sake of readability, the text also includes relevant explanations of these symbols and notations.
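For readers who implement the estimator, the following minimal sketch (in Python with NumPy/SciPy; all variable names are illustrative and not from the paper) shows how this notation maps onto standard numerical primitives:

```python
import numpy as np
from scipy.linalg import block_diag

# 0_{p x q}, 1_{p x q}, and an identity of appropriate size
Z = np.zeros((3, 2))              # 0_{3x2}
J = np.ones((3, 2))               # 1_{3x2}
I = np.eye(4)

# Kronecker and Hadamard products
A = np.arange(4.0).reshape(2, 2)
B = np.eye(2)
kron = np.kron(A, B)              # A (Kronecker) B
had = A * B                       # A (Hadamard) B, elementwise

# diag(v) and blkdiag(M1, ..., MN)
D = np.diag(np.array([1.0, 2.0, 3.0]))
Bd = block_diag(A, B)             # align A and B along the diagonal

# MATLAB-style indexing v(2:4) corresponds to v[1:4] (0-based, end-exclusive)
v = np.arange(6)
sub = v[1:4]
```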

3. Problem Formulation and Statistical Model

We now turn to the mathematical formulation of the problem. In the multistatic sonar location scenario considered here, the transmitters and receivers are stationary and the target is moving. Let M be the number of transmitters and N the number of receivers. We consider a two-dimensional location scenario. The unknown position and velocity vectors of the target are denoted by $\mathbf{u} = [x_u, y_u]^T$ and $\dot{\mathbf{u}} = [\dot{x}_u, \dot{y}_u]^T$. For simplicity, the complete unknown parameter vector will be denoted by
$$\boldsymbol{\theta} = [\mathbf{u}^T, \dot{\mathbf{u}}^T]^T.$$
To characterize the sensor location errors, the position vectors of the i-th transmitter and j-th receiver are modeled as random vectors $\mathbf{t}_i = [x_{t_i}, y_{t_i}]^T$ and $\mathbf{s}_j = [x_{s_j}, y_{s_j}]^T$ respectively, where $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, N$. We write compactly
$$\mathbf{z} = [\mathbf{t}^T, \mathbf{s}^T]^T,$$
where $\mathbf{t} = [\mathbf{t}_1^T, \mathbf{t}_2^T, \ldots, \mathbf{t}_M^T]^T$ and $\mathbf{s} = [\mathbf{s}_1^T, \mathbf{s}_2^T, \ldots, \mathbf{s}_N^T]^T$. Generally, it can be assumed that
$$\mathbf{z} \sim \mathcal{N}(\bar{\mathbf{z}}, \mathbf{Q}_z),$$
where the nominal sensor positions $\bar{\mathbf{z}}$ and the covariance matrix $\mathbf{Q}_z$ are known [6]. The sensor position error vector is then $\Delta\mathbf{z} = \mathbf{z} - \bar{\mathbf{z}}$.
Physically, each transmitter radiates a sonar signal, and all receivers observe both the direct-propagation signal and the signal reflected from the target. Thus, the observation model of the differential delay time between $\mathbf{t}_i$ and $\mathbf{s}_j$ is
$$\tau_{i,j} = \frac{1}{c}\left(\|\mathbf{u} - \mathbf{t}_i\| + \|\mathbf{u} - \mathbf{s}_j\| - \|\mathbf{t}_i - \mathbf{s}_j\|\right) + \Delta\tau_{i,j},$$
where c is the signal propagation speed and $\Delta\tau_{i,j}$ is the observation noise of $\tau_{i,j}$ [6]. Furthermore, as the target is moving, we can also obtain the observation model of the range rate (i.e., the Doppler shift measurement divided by the carrier frequency) between $\mathbf{t}_i$ and $\mathbf{s}_j$, that is,
$$f_{i,j} = \frac{1}{c}\left(\boldsymbol{\rho}_{\mathbf{u},\mathbf{t}_i} + \boldsymbol{\rho}_{\mathbf{u},\mathbf{s}_j}\right)^T \dot{\mathbf{u}} + \Delta f_{i,j},$$
where $\Delta f_{i,j}$ is the observation noise of $f_{i,j}$. For the notations $\boldsymbol{\rho}_{\mathbf{u},\mathbf{t}_i}$ and $\boldsymbol{\rho}_{\mathbf{u},\mathbf{s}_j}$, see Table 1.
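As a concrete illustration, a short sketch of the noise-free observation model in Equations (4) and (5) could look as follows (Python/NumPy; the function name and interface are our own):

```python
import numpy as np

def tau_f_pair(u, u_dot, t_i, s_j, c=1500.0):
    """Noise-free differential delay tau_{i,j} and range rate f_{i,j}
    for one transmitter/receiver pair (Equations (4) and (5))."""
    r_ut = np.linalg.norm(u - t_i)        # ||u - t_i||
    r_us = np.linalg.norm(u - s_j)        # ||u - s_j||
    r_ts = np.linalg.norm(t_i - s_j)      # ||t_i - s_j||
    tau = (r_ut + r_us - r_ts) / c
    rho_ut = (u - t_i) / r_ut             # unit vector rho_{u,t_i}
    rho_us = (u - s_j) / r_us             # unit vector rho_{u,s_j}
    f = (rho_ut + rho_us) @ u_dot / c
    return tau, f
```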
For the transmitter at position $\mathbf{t}_i$, all the related observations can be collected in an observation vector
$$\mathbf{m}_i = [\boldsymbol{\tau}_i^T, \mathbf{f}_i^T]^T,$$
where $\boldsymbol{\tau}_i = [\tau_{i,1}, \tau_{i,2}, \ldots, \tau_{i,N}]^T$ and $\mathbf{f}_i = [f_{i,1}, f_{i,2}, \ldots, f_{i,N}]^T$ for $i = 1, 2, \ldots, M$. The observations related to all the transmitters can then be stacked as
$$\mathbf{m} = [\mathbf{m}_1^T, \mathbf{m}_2^T, \ldots, \mathbf{m}_M^T]^T.$$
Furthermore, it is assumed that the conditional distribution (given $\mathbf{z}$) of the observation vector $\mathbf{m}$ is of the form
$$\mathbf{m} \mid \mathbf{z} \sim \mathcal{N}(\bar{\mathbf{m}}, \mathbf{Q}_m),$$
where $\bar{\mathbf{m}}$ is the ideal error-free observation vector and $\mathbf{Q}_m$ is the covariance matrix of $\mathbf{m}$. The corresponding observation error vector is $\Delta\mathbf{m} = \mathbf{m} - \bar{\mathbf{m}}$.
As part of the observation model, the following small error assumptions are made.
  1. $\|\Delta\mathbf{t}_i\| \ll \|\mathbf{u} - \bar{\mathbf{t}}_i\|$,
  2. $\|\Delta\mathbf{t}_i\| \ll \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|$,
  3. $\|\Delta\mathbf{s}_j\| \ll \|\mathbf{u} - \bar{\mathbf{s}}_j\|$,
  4. $\|\Delta\mathbf{s}_j\| \ll \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|$,
  5. $|\Delta\tau_{i,j}| \ll \bar{\tau}_{i,j}$,
  6. $|\Delta f_{i,j}| \ll \bar{f}_{i,j}$.
The physical motivation for these assumptions is that the position uncertainty of a given transmitter is small relative to its distance to the target and its distances to all the receivers, the position uncertainty of a given receiver is small relative to its distance to the target and its distances to all the transmitters, and the relative measurement errors are small. In addition, $\Delta\mathbf{z}$ and $\Delta\mathbf{m}$ are assumed to be statistically independent for ease of exposition.
Given the statistical model in Equation (8), the problem is to estimate the target position vector $\mathbf{u}$ and velocity vector $\dot{\mathbf{u}}$, i.e., $\boldsymbol{\theta}$, in real time and at a reasonable computational cost. A further task is the theoretical analysis of the statistical performance of the designed estimator.
We conclude this section with some comments. Generally, the small error assumptions can be satisfied by increasing the observation period in obtaining the differential delay time and range rate measurements in a nonsingular location geometry. In addition, as we will see in Section 5, our estimator requires accurate knowledge of the positive definite covariance matrices Q m and Q z . They can usually be obtained during the calibration stage of a multistatic sonar system. Specifically, some scattering models from the environment may also help determine Q m .

4. Hybrid Cramér–Rao Bound

In order to set a benchmark before designing an estimator, we now evaluate the Hybrid Cramér–Rao Bound (HCRB) [36,37,38,39] for the hybrid parameter estimation problem proposed in Section 3. The HCRB provides a lower bound on the error covariance matrix of the estimator of a hybrid unknown parameter vector.
In our statistical model, the parameter vector of interest $\boldsymbol{\theta}$ and the nuisance parameter vector (i.e., the actual sensor positions) $\mathbf{z}$ are both unknown. What makes them different is that $\boldsymbol{\theta}$ is deterministic while $\mathbf{z}$ is a random parameter vector. Such models arise in many applications where we want to investigate model uncertainty or environmental mismatch. Here, we consider $\boldsymbol{\theta}$ and $\mathbf{z}$ together as a hybrid parameter vector
$$\boldsymbol{\gamma} = [\boldsymbol{\theta}^T, \mathbf{z}^T]^T.$$
Before moving on to the estimator design, we outline the procedure for deriving the HCRB. In the hybrid parameter case, the HCRB is calculated from the joint probability density of the observation vector $\mathbf{m}$ and the sensor position vector $\mathbf{z}$. The hybrid information matrix $\mathbf{J}_H$ can be expressed as the sum
$$\mathbf{J}_H = \mathbf{J}_D + \mathbf{J}_P,$$
where $\mathbf{J}_D$ represents the contribution of the observations $\mathbf{m}$ and $\mathbf{J}_P$ represents the contribution of the prior knowledge on $\mathbf{z}$. Note that the unknown parameter vector $\boldsymbol{\gamma} = [\boldsymbol{\theta}^T, \mathbf{z}^T]^T$ enters the mean vector $\bar{\mathbf{m}}$ of the multivariate normal distribution in Equation (8), and $\mathbf{z}$ is itself a multivariate normal random vector as in Equation (3). Section 3 shows that $\bar{\mathbf{m}}$ depends on $\boldsymbol{\theta}$, $\mathbf{z}$ and c. In our model, the random parameter vector $\mathbf{z}$ does not depend on the deterministic parameter vector $\boldsymbol{\theta}$. Thus, $\mathbf{J}_D$ and $\mathbf{J}_P$ are straightforward to obtain [40]. Consequently,
$$\mathbf{J}_D = \mathrm{E}_{\mathbf{z}}\left[\left(\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\gamma}}\right)^T \mathbf{Q}_m^{-1}\,\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\gamma}}\right],$$
$$\mathbf{J}_P = \mathrm{blkdiag}(\mathbf{0}_{4\times 4}, \mathbf{Q}_z^{-1}).$$
When the levels of the sensor position uncertainties are small, according to the approximation principle suggested in [41], the expectation in Equation (11) can be approximated by replacing the random vector $\mathbf{z}$ with its expected value $\bar{\mathbf{z}}$. Then, from the blockwise inversion of $\mathbf{J}_H|_{\mathbf{z}=\bar{\mathbf{z}}}$ and the matrix inversion lemma, we have the HCRB for the estimation of $\boldsymbol{\theta} = [\mathbf{u}^T, \dot{\mathbf{u}}^T]^T$ as follows:
$$\mathrm{HCRB}_{\boldsymbol{\theta}} = \left[\left(\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}}\right)^T \left(\mathbf{Q}_m + \frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}}\,\mathbf{Q}_z\,\left(\frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}}\right)^T\right)^{-1} \frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}}\right]^{-1} \Bigg|_{\mathbf{z}=\bar{\mathbf{z}}}.$$
For numerical computation using Equation (13), $\partial\bar{\mathbf{m}}/\partial\boldsymbol{\theta}$ and $\partial\bar{\mathbf{m}}/\partial\mathbf{z}$ are required; their expressions are given in Appendix A.
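As a sketch of how Equation (13) could be evaluated numerically, assuming the two Jacobians of Appendix A have already been assembled (the function name and interface here are hypothetical):

```python
import numpy as np

def hcrb_theta(dm_dtheta, dm_dz, Qm, Qz):
    """Evaluate Equation (13) at z = z_bar.
    dm_dtheta: (2MN x 4) Jacobian of m_bar w.r.t. theta,
    dm_dz:     (2MN x 2(M+N)) Jacobian of m_bar w.r.t. z,
    both evaluated at the nominal sensor positions (Appendix A)."""
    C = Qm + dm_dz @ Qz @ dm_dz.T          # effective observation covariance
    J = dm_dtheta.T @ np.linalg.solve(C, dm_dtheta)
    return np.linalg.inv(J)                # 4x4 HCRB for [u; u_dot]
```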

5. Estimator Design

In this section, we use Taylor expansion, introduce auxiliary variables and apply multi-stage processing to deal with the nonlinear estimation problem proposed in Section 3. In particular, our algorithm can be divided into two stages, each involving an unconstrained linear weighted least squares (WLS) computation, which is computationally attractive. During the algorithm design and the performance analysis of our estimator, it is necessary to use many matrix symbols to simplify the presentation. These matrices are shown in Table 2 for easy reference. In justifying the introduction of these matrices, we find that they arise naturally in a general weighted least squares problem. To avoid obscuring the design of the estimator, this justification is deferred to Appendix B.
Based on Conditions 1 through 4 in Section 3, it follows from the first-order Taylor formula that
$$\|\mathbf{u} - \mathbf{t}_i\| \approx \|\mathbf{u} - \bar{\mathbf{t}}_i\| + \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\mathbf{u}}^T \Delta\mathbf{t}_i,$$
$$\|\mathbf{u} - \mathbf{s}_j\| \approx \|\mathbf{u} - \bar{\mathbf{s}}_j\| + \boldsymbol{\rho}_{\bar{\mathbf{s}}_j,\mathbf{u}}^T \Delta\mathbf{s}_j,$$
$$\|\mathbf{t}_i - \mathbf{s}_j\| \approx \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\| + \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_j}^T (\Delta\mathbf{t}_i - \Delta\mathbf{s}_j),$$
$$\boldsymbol{\rho}_{\mathbf{u},\mathbf{t}_i} \approx \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i} - \mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i} \Delta\mathbf{t}_i,$$
$$\boldsymbol{\rho}_{\mathbf{u},\mathbf{s}_j} \approx \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j} - \mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_j} \Delta\mathbf{s}_j.$$
If we plug Equations (14) through (16) into Equation (4), we obtain
$$c\tau_{i,j} \approx \|\mathbf{u} - \bar{\mathbf{t}}_i\| + \|\mathbf{u} - \bar{\mathbf{s}}_j\| - \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\| + \epsilon_{\tau,i,j},$$
where
$$\epsilon_{\tau,i,j} = \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\mathbf{u}}^T \Delta\mathbf{t}_i + \boldsymbol{\rho}_{\bar{\mathbf{s}}_j,\mathbf{u}}^T \Delta\mathbf{s}_j - \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_j}^T (\Delta\mathbf{t}_i - \Delta\mathbf{s}_j) + c\Delta\tau_{i,j}.$$
Furthermore, inserting Equations (17) and (18) into Equation (5) gives
$$cf_{i,j} \approx \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i}^T \dot{\mathbf{u}} + \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j}^T \dot{\mathbf{u}} + \epsilon_{f,i,j},$$
where
$$\epsilon_{f,i,j} = -(\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i}\dot{\mathbf{u}})^T \Delta\mathbf{t}_i - (\mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_j}\dot{\mathbf{u}})^T \Delta\mathbf{s}_j + c\Delta f_{i,j}.$$

5.1. First Stage

Without loss of generality, let M < N. Moving $(\|\mathbf{u} - \bar{\mathbf{t}}_i\| - \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|)$ from the right side to the left side of Equation (19) and squaring both sides, we see that
$$2\|\mathbf{u} - \bar{\mathbf{s}}_j\|\,\epsilon_{\tau,i,j} + \epsilon_{\tau,i,j}^2 \approx 2\bar{\mathbf{t}}_i^T(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j) + 2c\tau_{i,j}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\| + c^2\tau_{i,j}^2 - 2(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j)^T\mathbf{u} - 2(c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|)\|\mathbf{u} - \bar{\mathbf{t}}_i\|.$$
Applying similar procedures to Equation (21) gives
$$\|\mathbf{u} - \bar{\mathbf{s}}_j\|\,\epsilon_{f,i,j} + (\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j}^T\dot{\mathbf{u}})\,\epsilon_{\tau,i,j} + \epsilon_{\tau,i,j}\epsilon_{f,i,j} \approx cf_{i,j}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\| + c^2\tau_{i,j}f_{i,j} - cf_{i,j}\|\mathbf{u} - \bar{\mathbf{t}}_i\| - (\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j)^T\dot{\mathbf{u}} - (c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|)\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i}^T\dot{\mathbf{u}}.$$
If we define an unknown parameter vector as
$$\boldsymbol{\phi}_1 = [\mathbf{u}^T, \boldsymbol{\alpha}^T, \dot{\mathbf{u}}^T, \boldsymbol{\beta}^T]^T,$$
where
$$\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_M]^T, \qquad \boldsymbol{\beta} = [\beta_1, \beta_2, \ldots, \beta_M]^T,$$
$$\alpha_i = \|\mathbf{u} - \bar{\mathbf{t}}_i\|, \qquad \beta_i = \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i}^T\dot{\mathbf{u}}, \qquad i = 1, 2, \ldots, M,$$
then a linear system of equations can be obtained from Equations (23) and (24) as
$$\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} + \boldsymbol{\epsilon}_1^{(2)} \approx \mathbf{h}_1 - \mathbf{G}_1\boldsymbol{\phi}_1.$$
The details of $\mathbf{h}_1$, $\mathbf{G}_1$, $\mathbf{B}_1$, $\boldsymbol{\epsilon}_1^{(1)}$ and $\boldsymbol{\epsilon}_1^{(2)}$ are presented in Appendix C. Note that $\boldsymbol{\epsilon}_1^{(1)}$ and $\boldsymbol{\epsilon}_1^{(2)}$ are the first-order and second-order approximation errors, respectively.
By ignoring the second-order error term $\boldsymbol{\epsilon}_1^{(2)}$, the WLS solution to Equation (30) is
$$\hat{\boldsymbol{\phi}}_1 = \mathbf{H}_1\mathbf{h}_1,$$
with covariance matrix
$$\mathrm{cov}(\hat{\boldsymbol{\phi}}_1) = \mathrm{cov}(\Delta\boldsymbol{\phi}_1) \approx \mathbf{P}_1^{o-1},$$
where $\mathbf{H}_1 = \mathbf{P}_1^{-1}\mathbf{G}_1^T\mathbf{W}_1$, $\mathbf{P}_1 = \mathbf{G}_1^T\mathbf{W}_1\mathbf{G}_1$ and $\mathbf{P}_1^o$ is the zero-order approximation of $\mathbf{P}_1$. The weighting matrix $\mathbf{W}_1$ is the inverse of the covariance matrix of the approximation error $\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}$, that is,
$$\mathbf{W}_1 = \left[\mathrm{E}\left(\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}\boldsymbol{\epsilon}_1^{(1)T}\mathbf{B}_1^T\right)\right]^{-1} = \mathbf{B}_1^{-T}\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})^{-1}\,\mathbf{B}_1^{-1}.$$
The computation of $\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})$ is straightforward. Because of the assumed statistical independence between $\Delta\mathbf{z}$ and $\Delta\mathbf{m}$ in Equation (A15),
$$\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)}) = \mathbf{D}_z\mathbf{Q}_z\mathbf{D}_z^T + c^2\mathbf{Q}_m,$$
where $\mathbf{D}_z$ is given in Equation (A16).
Equation (32) follows from the first-order perturbation analysis:
$$\mathrm{cov}(\Delta\boldsymbol{\phi}_1) \approx \mathrm{E}\left[\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}\cdot(\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)})^T\right] = \mathbf{H}_1^o\cdot\mathbf{B}_1\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})\,\mathbf{B}_1^T\cdot\mathbf{H}_1^{oT} = \mathbf{P}_1^{o-1}\mathbf{G}_1^{oT}\mathbf{W}_1\cdot\mathbf{W}_1^{-1}\cdot\mathbf{W}_1^T\mathbf{G}_1^o\mathbf{P}_1^{o-T} = \mathbf{P}_1^{o-1}\cdot\mathbf{G}_1^{oT}\mathbf{W}_1^T\mathbf{G}_1^o\cdot\mathbf{P}_1^{o-T} = \mathbf{P}_1^{o-1}\mathbf{P}_1^{oT}\mathbf{P}_1^{o-T} = \mathbf{P}_1^{o-1}.$$
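A minimal sketch of this first-stage WLS computation, Equations (31) through (34), assuming the data structures of Appendix C have already been assembled (the function name and interface are ours):

```python
import numpy as np

def wls_stage1(h1, G1, B1, Dz, Qm, Qz, c):
    """First-stage WLS solution: phi1_hat = (G1' W1 G1)^{-1} G1' W1 h1,
    with W1 = [B1 cov(eps1) B1']^{-1} and cov(eps1) = Dz Qz Dz' + c^2 Qm."""
    cov_eps1 = Dz @ Qz @ Dz.T + c**2 * Qm           # Equation (34)
    W1 = np.linalg.inv(B1 @ cov_eps1 @ B1.T)        # Equation (33)
    P1 = G1.T @ W1 @ G1
    phi1_hat = np.linalg.solve(P1, G1.T @ W1 @ h1)  # Equation (31)
    cov_phi1 = np.linalg.inv(P1)                    # Equation (32)
    return phi1_hat, cov_phi1, P1
```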

5.2. Second Stage

With $\hat{\boldsymbol{\phi}}_1$ and its covariance matrix $\mathbf{P}_1^{o-1}$, the aim of the second stage is to estimate the estimation error vector introduced in the first stage. To use symbols similar to those of the first stage, we denote this estimation error vector by $\boldsymbol{\phi}_2$, i.e.,
$$\boldsymbol{\phi}_2 = [\Delta\mathbf{u}^T, \Delta\dot{\mathbf{u}}^T]^T = \Delta\boldsymbol{\phi}_1([1, 2, M+3, M+4]).$$
By substituting $\mathbf{u} = \hat{\mathbf{u}} - \Delta\mathbf{u}$ and $\alpha_i = \hat{\alpha}_i - \Delta\alpha_i$ into $\alpha_i^2 = \|\mathbf{u} - \bar{\mathbf{t}}_i\|^2$, we obtain
$$2\alpha_i\Delta\alpha_i + \|\Delta\mathbf{u}\|^2 + (\Delta\alpha_i)^2 = \hat{\alpha}_i^2 - \|\hat{\mathbf{u}} - \bar{\mathbf{t}}_i\|^2 + 2(\hat{\mathbf{u}} - \bar{\mathbf{t}}_i)^T\Delta\mathbf{u}.$$
Furthermore, plugging $\alpha_i = \hat{\alpha}_i - \Delta\alpha_i$, $\beta_i = \hat{\beta}_i - \Delta\beta_i$, $\mathbf{u} = \hat{\mathbf{u}} - \Delta\mathbf{u}$ and $\dot{\mathbf{u}} = \hat{\dot{\mathbf{u}}} - \Delta\dot{\mathbf{u}}$ into $\alpha_i\beta_i = (\mathbf{u} - \bar{\mathbf{t}}_i)^T\dot{\mathbf{u}}$ gives
$$\beta_i\Delta\alpha_i + \alpha_i\Delta\beta_i + \Delta\alpha_i\Delta\beta_i + \Delta\mathbf{u}^T\Delta\dot{\mathbf{u}} = \hat{\alpha}_i\hat{\beta}_i - (\hat{\mathbf{u}} - \bar{\mathbf{t}}_i)^T\hat{\dot{\mathbf{u}}} + \hat{\dot{\mathbf{u}}}^T\Delta\mathbf{u} + (\hat{\mathbf{u}} - \bar{\mathbf{t}}_i)^T\Delta\dot{\mathbf{u}}.$$
In matrix notation, from Equations (37) and (38), we have
$$\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \boldsymbol{\epsilon}_2^{(2)} = \mathbf{h}_2 - \mathbf{G}_2\boldsymbol{\phi}_2,$$
where
$$\boldsymbol{\epsilon}_2^{(1)} = \Delta\boldsymbol{\phi}_1,$$
i.e., the estimation error of the first stage, $\Delta\boldsymbol{\phi}_1$, is treated as the first-order approximation error of the second stage. This is a key point of our estimator. The details of $\mathbf{h}_2$, $\mathbf{G}_2$, $\mathbf{B}_2$ and the second-order approximation error $\boldsymbol{\epsilon}_2^{(2)}$ can be found in Appendix D.
By ignoring the second-order error term $\boldsymbol{\epsilon}_2^{(2)}$ and following the first stage's approach, the WLS solution to Equation (39) is
$$\hat{\boldsymbol{\phi}}_2 = \mathbf{H}_2\mathbf{h}_2,$$
with error covariance matrix
$$\mathrm{cov}(\Delta\boldsymbol{\phi}_2) \approx \mathbf{P}_2^{o-1},$$
where $\mathbf{H}_2 = \mathbf{P}_2^{-1}\mathbf{G}_2^T\mathbf{W}_2$, $\mathbf{P}_2 = \mathbf{G}_2^T\mathbf{W}_2\mathbf{G}_2$ and $\mathbf{P}_2^o$ is the zero-order approximation of $\mathbf{P}_2$. The weighting matrix $\mathbf{W}_2$ is the inverse of the covariance matrix of the approximation error $\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}$, that is,
$$\mathbf{W}_2 = \left[\mathrm{E}\left(\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}\boldsymbol{\epsilon}_2^{(1)T}\mathbf{B}_2^T\right)\right]^{-1} \approx \mathbf{B}_2^{-T}\mathbf{P}_1^o\mathbf{B}_2^{-1}.$$
Finally, our estimator can be constructed from $\hat{\boldsymbol{\phi}}_1$ in Equation (31) and $\hat{\boldsymbol{\phi}}_2$ in Equation (41) as
$$\hat{\boldsymbol{\theta}} = [\hat{\boldsymbol{\phi}}_1(1{:}2)^T, \hat{\boldsymbol{\phi}}_1(M+3{:}M+4)^T]^T - \hat{\boldsymbol{\phi}}_2.$$
Last but not least, some obstacles arise in the practical computation of our estimator. In the first stage, $\mathbf{B}_1$ and $\mathbf{D}_z$, as given in Equations (A13) and (A16), involve $\mathbf{u}$ and $\dot{\mathbf{u}}$, which are unavailable to the algorithm. To resolve this problem, we first assign an identity matrix to $\mathbf{B}_1$ and an all-zero matrix to $\mathbf{D}_z$ to get coarse estimates of $\mathbf{u}$ and $\dot{\mathbf{u}}$ from Equation (31), and then substitute the coarse estimates into Equations (A13) and (A16) to update $\mathbf{B}_1$ and $\mathbf{D}_z$. Confronted with a similar problem in the second stage in Equation (A20), we substitute $\hat{\boldsymbol{\alpha}}$ and $\hat{\boldsymbol{\beta}}$ for computing $\mathbf{B}_2$, resulting in $\hat{\mathbf{B}}_2$ and $\hat{\mathbf{W}}_2$. These approximations will be accounted for in the statistical performance analysis in Section 6.
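A sketch of the second stage and the final combination in Equation (44) follows, under our reconstruction of the Appendix D matrices (the sign conventions and index bookkeeping below follow that reconstruction and should be checked against the original; the function name and interface are ours):

```python
import numpy as np

def second_stage(phi1_hat, P1, t_bar, M):
    """Second-stage WLS, Equations (39)-(44). phi1_hat = [u; alpha; u_dot; beta],
    P1 = G1' W1 G1 from the first stage, t_bar: (M x 2) nominal transmitters."""
    u, alpha = phi1_hat[:2], phi1_hat[2:M+2]
    u_dot, beta = phi1_hat[M+2:M+4], phi1_hat[M+4:]
    d = u - t_bar                                   # rows: (u_hat - t_bar_i)^T
    h2 = np.concatenate([np.zeros(2),
                         alpha**2 - np.sum(d**2, axis=1),   # Equation (37) RHS
                         np.zeros(2),
                         alpha*beta - d @ u_dot])           # Equation (38) RHS
    G2 = np.block([[-np.eye(2),                   np.zeros((2, 2))],
                   [-2*d,                         np.zeros((M, 2))],
                   [np.zeros((2, 2)),             -np.eye(2)],
                   [np.outer(np.ones(M), -u_dot), -d]])
    B2 = np.block([[np.diag(np.r_[1, 1, 2*alpha]), np.zeros((M+2, M+2))],
                   [np.diag(np.r_[0, 0, beta]),    np.diag(np.r_[1, 1, alpha])]])
    W2 = np.linalg.inv(B2 @ np.linalg.solve(P1, B2.T))      # Equation (43)
    phi2_hat = np.linalg.solve(G2.T @ W2 @ G2, G2.T @ W2 @ h2)  # Equation (41)
    theta1 = np.concatenate([phi1_hat[:2], phi1_hat[M+2:M+4]])
    return theta1 - phi2_hat                                # Equation (44)
```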

5.3. Summary

As a guide to implementation, the flowchart of the proposed estimator is shown in Figure 1, and the first stage of our estimator is summarized in Algorithm 1.
Algorithm 1 First stage of the estimator.
1: procedure Estimator-First-Stage($\mathbf{Q}_m$, $\mathbf{Q}_z$, c, $\mathbf{G}_1$, $\mathbf{h}_1$, $\mathbf{D}_z$, $\mathbf{B}_1$)
2:   Compute $\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})$ from $\mathbf{Q}_m$, $\mathbf{Q}_z$, $\mathbf{D}_z$, c by Equation (34)
3:   Compute $\mathbf{W}_1$ from $\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})$, $\mathbf{B}_1$ by Equation (33)
4:   Compute $\hat{\boldsymbol{\phi}}_1$ from $\mathbf{G}_1$, $\mathbf{h}_1$, $\mathbf{W}_1$ by Equation (31)
5:   Compute $\mathrm{cov}(\hat{\boldsymbol{\phi}}_1)$ from $\mathbf{G}_1$, $\mathbf{W}_1$ by Equation (32)
6:   return $\hat{\boldsymbol{\phi}}_1$, $\mathrm{cov}(\hat{\boldsymbol{\phi}}_1)$
7: end procedure

6. Performance Analysis

The covariance matrix and the bias vector are the two most important numerical characterizations of a vector estimator. The perturbations in the design matrices (i.e., $\mathbf{G}_1$ and $\mathbf{G}_2$) and the higher-order noise terms in the observation vectors (i.e., $\mathbf{h}_1$ and $\mathbf{h}_2$) in Equations (30) and (39) mean that the conditions of the Gauss–Markov theorem no longer hold.

6.1. Bias Vector

We first derive the bias vector of our estimator in Equation (44). The bias analysis here is carried out up to the second-order statistics of the observation errors and the sensor position errors, i.e., random terms higher than second order are ignored. The matrix differential calculus presented in [42] is used intensively in this subsection.
It can be seen from Equation (44) that the total bias vector of our estimator $\hat{\boldsymbol{\theta}}$ is
$$[\boldsymbol{\mu}_1(1{:}2)^T, \boldsymbol{\mu}_1(M+3{:}M+4)^T]^T - \boldsymbol{\mu}_2,$$
where $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$ are the bias vectors of $\hat{\boldsymbol{\phi}}_1$ and $\hat{\boldsymbol{\phi}}_2$, respectively. The remaining task is to compute the bias vectors of the first and second stages, i.e., $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$. We reiterate that random terms higher than second order are ignored at each occurrence of the approximately-equal sign. Please refer to Table 2 for the matrix symbols involved.
The error of $\hat{\boldsymbol{\phi}}_1$ is
$$\Delta\boldsymbol{\phi}_1 = \hat{\boldsymbol{\phi}}_1 - \boldsymbol{\phi}_1 \approx \mathbf{H}_1(\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} + \boldsymbol{\epsilon}_1^{(2)}) \approx \mathbf{H}_1^o(\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} + \boldsymbol{\epsilon}_1^{(2)}) + \Delta\mathbf{H}_1\cdot\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)},$$
where $\Delta\mathbf{H}_1$ and its related differentials can be obtained by matrix differentiation as follows:
$$\Delta\mathbf{H}_1 = \mathbf{P}_1^{o-1}\Delta\mathbf{G}_1^T\mathbf{W}_1 + \Delta(\mathbf{P}_1^{-1})\mathbf{G}_1^{oT}\mathbf{W}_1,$$
$$\Delta(\mathbf{P}_1^{-1}) = -\mathbf{P}_1^{o-1}\Delta\mathbf{P}_1\mathbf{P}_1^{o-1},$$
$$\Delta\mathbf{P}_1 = \Delta\mathbf{G}_1^T\mathbf{W}_1\mathbf{G}_1^o + \mathbf{G}_1^{oT}\mathbf{W}_1\Delta\mathbf{G}_1.$$
Putting Equations (47) through (49) into Equation (46) gives the estimation error vector of the first stage:
$$\Delta\boldsymbol{\phi}_1 \approx \mathbf{H}_1^o\cdot\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} + \mathbf{H}_1^o\boldsymbol{\epsilon}_1^{(2)} + \mathbf{P}_1^{o-1}\Delta\mathbf{G}_1^T\mathbf{K}_1\cdot\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} - \mathbf{H}_1^o\Delta\mathbf{G}_1\mathbf{H}_1^o\cdot\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}.$$
We note that the weight matrix $\mathbf{W}_1$ has no random errors in the first stage.
As in the first stage, the estimation error vector of the second stage is
$$\Delta\boldsymbol{\phi}_2 = \hat{\boldsymbol{\phi}}_2 - \boldsymbol{\phi}_2 = \mathbf{H}_2(\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \boldsymbol{\epsilon}_2^{(2)}) \approx \mathbf{H}_2^o(\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \boldsymbol{\epsilon}_2^{(2)}) + \Delta\mathbf{H}_2\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}.$$
Before moving on to the expression of $\Delta\mathbf{H}_2$, recall that we use $\hat{\mathbf{B}}_2$ rather than $\mathbf{B}_2$ in the practical implementation. Thus, $\Delta\mathbf{H}_2$ can be obtained by matrix differentiation as follows:
$$\Delta\mathbf{H}_2 = \Delta(\mathbf{P}_2^{-1})\mathbf{G}_2^{oT}\mathbf{W}_2 + \mathbf{P}_2^{o-1}\Delta\mathbf{G}_2^T\mathbf{W}_2 + \mathbf{P}_2^{o-1}\mathbf{G}_2^{oT}\Delta\mathbf{W}_2,$$
$$\Delta(\mathbf{P}_2^{-1}) = -\mathbf{P}_2^{o-1}\Delta\mathbf{P}_2\mathbf{P}_2^{o-1},$$
$$\Delta\mathbf{P}_2 = \Delta\mathbf{G}_2^T\mathbf{W}_2\mathbf{G}_2^o + \mathbf{G}_2^{oT}\mathbf{W}_2\Delta\mathbf{G}_2 + \mathbf{G}_2^{oT}\Delta\mathbf{W}_2\mathbf{G}_2^o,$$
$$\Delta\mathbf{W}_2 = \mathbf{B}_2^{-T}\Delta\mathbf{P}_1\mathbf{B}_2^{-1} - \mathbf{W}_2\Delta\mathbf{B}_2\mathbf{B}_2^{-1} - \mathbf{B}_2^{-T}\Delta\mathbf{B}_2^T\mathbf{W}_2.$$
Comparing Equations (52) through (54) with Equations (47) through (49), we see that $\Delta\mathbf{W}_2$ in Equation (55) contributes new perturbations resulting from the practical implementation, introduced through $\Delta\mathbf{P}_1$ and $\Delta\mathbf{B}_2$. Furthermore, $\Delta\mathbf{P}_1$ can be expressed in terms of $\Delta\mathbf{G}_1$ by Equation (49).
Then, from Equation (51), combining Equations (52) through (55) with Equation (49), we have
$$\Delta\boldsymbol{\phi}_2 \approx \mathbf{H}_2^o\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \mathbf{H}_2^o\boldsymbol{\epsilon}_2^{(2)} + \mathbf{P}_2^{o-1}\Delta\mathbf{G}_2^T\mathbf{K}_2\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} - \mathbf{H}_2^o\Delta\mathbf{G}_2\mathbf{H}_2^o\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \mathbf{H}_2^o\Delta\mathbf{B}_2\mathbf{B}_2^{-1}\mathbf{U}\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} - \mathbf{V}\cdot\mathbf{B}_2^{-T}\Delta\mathbf{B}_2^T\mathbf{K}_2\cdot\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)} + \mathbf{V}\cdot\mathbf{B}_2^{-T}\cdot\mathbf{G}_1^{oT}\mathbf{W}_1\Delta\mathbf{G}_1\cdot\mathbf{B}_2^{-1}\mathbf{U}\cdot\mathbf{B}_2\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)} + \mathbf{V}\cdot\mathbf{B}_2^{-T}\cdot\Delta\mathbf{G}_1^T\mathbf{W}_1\mathbf{G}_1^o\cdot\mathbf{B}_2^{-1}\mathbf{U}\cdot\mathbf{B}_2\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}.$$
The last two terms above follow from Equations (46) and (49).
Finally, taking expectations of $\Delta\boldsymbol{\phi}_1$ and $\Delta\boldsymbol{\phi}_2$ yields $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$. This involves complicated matrix partitioning and multivariate statistical analysis; interested readers may refer to Appendix E.

6.2. Covariance Matrix

We then derive the covariance matrix of our estimator. The covariance analysis here is carried out up to the first-order statistics of the estimator errors, i.e., error terms higher than first order are ignored.
$$\mathrm{cov}(\hat{\boldsymbol{\theta}}) = \mathrm{cov}\left(\hat{\boldsymbol{\phi}}_1([1,2,M+3,M+4]) - \hat{\boldsymbol{\phi}}_2\right) = \mathrm{cov}\left(\boldsymbol{\phi}_1([1,2,M+3,M+4]) + \Delta\boldsymbol{\phi}_1([1,2,M+3,M+4]) - \boldsymbol{\phi}_2 - \Delta\boldsymbol{\phi}_2\right) = \mathrm{cov}(-\Delta\boldsymbol{\phi}_2) \quad (\text{by Equation (36)}) = \mathrm{cov}(\Delta\boldsymbol{\phi}_2) \approx \mathbf{P}_2^{o-1}.$$
We note that Yang et al. [5] have explicitly given the covariance matrix of the two-stage weighted least squares (TS-WLS) method as follows:
$$\mathrm{cov}(\hat{\boldsymbol{\theta}}_{\mathrm{TS\text{-}WLS}}) \approx \left(\mathbf{G}_2^{oT}\mathbf{B}_2^{-T}\mathbf{V}_1^{-1}\mathbf{B}_2^{-1}\mathbf{G}_2^o\right)^{-1},$$
where
$$\mathbf{V}_1 = (\mathbf{G}_1^{oT}\tilde{\mathbf{W}}_1\mathbf{G}_1^o)^{-1}\mathbf{G}_1^{oT}\tilde{\mathbf{W}}_1\,\mathbf{B}_1\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})\,\mathbf{B}_1^T\,\tilde{\mathbf{W}}_1^T\mathbf{G}_1^o(\mathbf{G}_1^{oT}\tilde{\mathbf{W}}_1\mathbf{G}_1^o)^{-T},$$
$$\tilde{\mathbf{W}}_1 = \mathbf{B}_1^{-T}(c^2\mathbf{Q}_m)^{-1}\mathbf{B}_1^{-1}.$$
We now claim the following proposition.
Proposition 1.
$$\mathrm{cov}(\hat{\boldsymbol{\theta}}_{\mathrm{TS\text{-}WLS}}) \succeq \mathrm{cov}(\hat{\boldsymbol{\theta}}).$$
Proof.
By Equations (57) and (58), it suffices to prove
$$\mathbf{P}_2^o \succeq \mathbf{G}_2^{oT}\mathbf{B}_2^{-T}\mathbf{V}_1^{-1}\mathbf{B}_2^{-1}\mathbf{G}_2^o.$$
To get started, we expand the left side of Equation (62):
$$\mathbf{P}_2^o = \mathbf{G}_2^{oT}\mathbf{W}_2\mathbf{G}_2^o \approx \mathbf{G}_2^{oT}(\mathbf{B}_2^{-T}\mathbf{P}_1^o\mathbf{B}_2^{-1})\mathbf{G}_2^o \;\,(\text{by Equation (43)}) = \mathbf{G}_2^{oT}\left[\mathbf{B}_2(\mathbf{G}_1^{oT}\mathbf{W}_1\mathbf{G}_1^o)^{-1}\mathbf{B}_2^T\right]^{-1}\mathbf{G}_2^o = \mathbf{G}_2^{oT}\mathbf{B}_2^{-T}\,\mathbf{G}_1^{oT}\mathbf{W}_1\mathbf{G}_1^o\,\mathbf{B}_2^{-1}\mathbf{G}_2^o = \mathbf{G}_2^{oT}\mathbf{B}_2^{-T}\mathbf{V}_2^{-1}\mathbf{B}_2^{-1}\mathbf{G}_2^o \;\,(\text{by Equation (33)}),$$
where
$$\mathbf{V}_2 = \left[\mathbf{G}_1^{oT}\mathbf{B}_1^{-T}\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})^{-1}\,\mathbf{B}_1^{-1}\mathbf{G}_1^o\right]^{-1}.$$
As $\mathbf{B}_2^{-1}\mathbf{G}_2^o$ has full column rank, to complete the proof we need only show that
$$\mathbf{V}_1 \succeq \mathbf{V}_2.$$
We can prove Equation (65) by the matrix Schwarz inequality (Lemma 1.1 in [43]). Because $\mathbf{B}_1\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})\,\mathbf{B}_1^T$ is positive definite, it admits a Cholesky factorization
$$\mathbf{B}_1\,\mathrm{cov}(\boldsymbol{\epsilon}_1^{(1)})\,\mathbf{B}_1^T = \mathbf{L}\mathbf{L}^T,$$
where $\mathbf{L}$ is a unique lower triangular matrix. Let
$$\mathbf{P} = \mathbf{L}^{-1}\mathbf{G}_1^o, \qquad \mathbf{Q} = \mathbf{L}^T\tilde{\mathbf{W}}_1^T\mathbf{G}_1^o(\mathbf{G}_1^{oT}\tilde{\mathbf{W}}_1\mathbf{G}_1^o)^{-T}.$$
By the matrix Schwarz inequality, we get
$$\mathbf{Q}^T\mathbf{Q} \succeq (\mathbf{P}^T\mathbf{Q})^T(\mathbf{P}^T\mathbf{P})^{-1}(\mathbf{P}^T\mathbf{Q}).$$
Then, with a straightforward verification, we have
$$\mathbf{Q}^T\mathbf{Q} = \mathbf{V}_1, \qquad \mathbf{P}^T\mathbf{P} = \mathbf{V}_2^{-1}, \qquad \mathbf{P}^T\mathbf{Q} = \mathbf{I}.$$
The equations above imply that $\mathbf{V}_1 \succeq \mathbf{V}_2$, as desired. □
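The key step $\mathbf{V}_1 \succeq \mathbf{V}_2$ is the Gauss–Markov-type statement that any linear unbiased combination is no better than the optimally weighted one. The following short numerical check (our own illustration, not from the paper) confirms it on a random instance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random instance: C stands for B1*cov(eps1)*B1' (positive definite),
# G for G1o (full column rank), Wt for the suboptimal weight W1_tilde.
n, k = 8, 3
A = rng.standard_normal((n, n)); C = A @ A.T + n*np.eye(n)
G = rng.standard_normal((n, k))
Wt = np.linalg.inv(np.diag(rng.uniform(1.0, 2.0, n)))

F = np.linalg.inv(G.T @ Wt @ G) @ G.T @ Wt       # unbiased estimator matrix
V1 = F @ C @ F.T                                  # Equation (59)
V2 = np.linalg.inv(G.T @ np.linalg.solve(C, G))   # Equation (64)

# V1 - V2 should be positive semidefinite (Proposition 1's key step)
eigs = np.linalg.eigvalsh(V1 - V2)
print(eigs.min() >= -1e-10)   # True up to rounding
```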
We close this section by establishing the approximate statistical efficiency of our estimator, as the proposition below shows.
Proposition 2.
When the small error Conditions 1 through 6 are satisfied,
$$\mathrm{HCRB}_{\boldsymbol{\theta}} \approx \mathrm{cov}(\hat{\boldsymbol{\theta}}).$$
Proof.
In view of Equations (13) and (57), we will prove that
$$\left(\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}}\right)^T\left(\mathbf{Q}_m + \frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}}\,\mathbf{Q}_z\,\left(\frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}}\right)^T\right)^{-1}\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}}\Bigg|_{\mathbf{z}=\bar{\mathbf{z}}} \approx \mathbf{P}_2^o.$$
By Equations (63), (64) and (34), the right side of Equation (71) is
$$\mathbf{P}_2^o = \mathbf{G}_3^T\left(\mathbf{D}_z\mathbf{Q}_z\mathbf{D}_z^T + c^2\mathbf{Q}_m\right)^{-1}\mathbf{G}_3,$$
where
$$\mathbf{G}_3 = \mathbf{B}_1^{-1}\mathbf{G}_1^o\mathbf{B}_2^{-1}\mathbf{G}_2^o.$$
Expanding Equation (73) (in Appendix F) and comparing it with Equation (A1), we can show that, under the small error conditions,
$$\mathbf{G}_3 \approx -c\,\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}}\Bigg|_{\mathbf{z}=\bar{\mathbf{z}}}.$$
Furthermore, comparing Equation (A16) with Equation (A2), we arrive at
$$\mathbf{D}_z \approx c\,\frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}}\Bigg|_{\mathbf{z}=\bar{\mathbf{z}}}.$$
If we plug Equations (74) and (75) into Equation (72), we get Equation (71), since the signs cancel in the quadratic forms. □

6.3. Time and Space Complexity

The computational load of our algorithm is concentrated in solving WLS problems. The singular value decomposition (SVD) is an efficient tool for solving WLS problems. With the method of truncated SVD, the time complexity is $O(2mn^2 + n^3 + mn + n)$ and the space complexity is $O(3n^2 + 2mn + 3n)$ for a matrix of size $m \times n$ [44].
To facilitate the complexity analysis, we list the matrices involved in the algorithm and their sizes in Table 3. As can be seen from the table, under the assumption that M < N, the computational complexity of the first stage dominates the total computational complexity. For the design matrix $\mathbf{G}_1$ of the first stage, $m = 2MN$ and $n = 2M+4$. Keeping the highest-order term of N and its coefficient, our algorithm takes $O(N[4(4M^3 + 17M^2 + 18M)])$ time and $O(N[8M(M+2)])$ space. In summary, our algorithm has linear complexity in both time and space.
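For concreteness, a WLS solve via truncated SVD, the $O(mn^2)$ kernel referred to above, could be sketched as follows (Python/NumPy; the rcond threshold is an assumption):

```python
import numpy as np

def wls_via_svd(G, h, W_sqrt, rcond=1e-12):
    """Solve the WLS problem min ||W^{1/2}(h - G phi)||^2 with a truncated
    SVD. W_sqrt is a square root of the weight matrix (e.g., a Cholesky
    factor), so the problem is first whitened to ordinary least squares."""
    A = W_sqrt @ G                       # whitened design matrix, m x n
    b = W_sqrt @ h
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]              # truncate tiny singular values
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```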

7. Results and Discussion

In the previous section, we have theoretically analyzed the performance of our estimator. Now we ascertain the performance of our estimator via computer simulations. Our simulations are divided into four subsections. Section 7.1 compares the error covariance matrix of our estimator with HCRB and the ones of two typical estimators, i.e., the spherical-interpolation initialized Taylor series method (SI-TS) [28,45] and TS-WLS [5]. Then, surface plots of the biases are shown in Section 7.2. In Section 7.3, we empirically explore the time complexity of our estimator for locating multiple disjoint targets. Finally, we use 80 randomly generated large-scale localization scenarios to further test the proposed estimator in Section 7.4.
The first three subsections are based on the simulation settings of [6]. Specifically, the simulations use M = 3 transmitters and N = 5 receivers to determine the unknown position $\mathbf{u}$ and velocity $\dot{\mathbf{u}}$ of a moving target. As in [6], the nominal positions of the sensors are known and given as follows: $\bar{\mathbf{t}}_1 = [1500, 1500]^T$ m, $\bar{\mathbf{t}}_2 = [900, 4000]^T$ m, $\bar{\mathbf{t}}_3 = [3000, 4000]^T$ m, $\bar{\mathbf{s}}_1 = [1000, 3000]^T$ m, $\bar{\mathbf{s}}_2 = [2500, 500]^T$ m, $\bar{\mathbf{s}}_3 = [3000, 1000]^T$ m, $\bar{\mathbf{s}}_4 = [2000, 4000]^T$ m, and $\bar{\mathbf{s}}_5 = [2000, 2000]^T$ m. Graphically, the nominal location geometry is shown in Figure 2.
The additional common settings for Sections 7.1 and 7.2 are as follows. The target is at $\mathbf{u} = [0, 2000]^T$ m with velocity $\dot{\mathbf{u}} = [20, 10]^T$ m/s, and the signal propagation speed is c = 1500 m/s. The observation error covariance matrix related to the transmitter at $\mathbf{t}_i$ is $\mathbf{Q}_{m_i} = \mathrm{blkdiag}(\sigma_\tau^2\mathbf{R}, \sigma_f^2\mathbf{R})$ for $i = 1, 2, \ldots, M$, where $\sigma_\tau$ is a given positive constant, $\sigma_f^2 = \sigma_\tau^2/10$, and $\mathbf{R} = 0.5\,\mathbf{1}_{N\times N} + 0.5\,\mathbf{I}_N$ [31]. The sensors' position error covariance matrix is $\mathbf{Q}_z = \sigma_z^2\mathbf{I}_{2(M+N)}$, where $\sigma_z$ is a given positive constant.
We list the settings of the Monte Carlo simulations in Table 4 to describe our experiments more clearly. Data for the simulations are generated using Equation (8) with the values in Table 4.
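A sketch of how the covariance matrices and one Monte Carlo draw could be generated under these settings (Python/SciPy; the function names are ours, and m_bar_of stands for a user-supplied map from drawn sensor positions to the error-free observation vector):

```python
import numpy as np
from scipy.linalg import block_diag

def build_covariances(M, N, sigma_tau, sigma_z):
    """Covariance matrices of Section 7 (sigma_f^2 = sigma_tau^2 / 10)."""
    R = 0.5*np.ones((N, N)) + 0.5*np.eye(N)        # correlated across receivers
    Qm_i = block_diag(sigma_tau**2 * R, (sigma_tau**2/10) * R)
    Qm = block_diag(*[Qm_i]*M)                     # blkdiag over transmitters
    Qz = sigma_z**2 * np.eye(2*(M + N))
    return Qm, Qz

def draw_sample(z_bar, m_bar_of, Qm, Qz, rng):
    """One Monte Carlo draw per Equations (3) and (8)."""
    z = rng.multivariate_normal(z_bar, Qz)         # perturbed sensor positions
    m = rng.multivariate_normal(m_bar_of(z), Qm)   # noisy observations given z
    return z, m
```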

7.1. Performance Comparison

We now turn to the performance comparison of several estimators. For a specific estimator $\hat{\boldsymbol{\theta}}$ of the unknown parameter vector $\boldsymbol{\theta}$, performance can be measured by the root-mean-square error (RMSE), defined as
$$\mathrm{RMSE} = \sqrt{\frac{1}{L}\sum_{\ell=1}^{L}\|\boldsymbol{\theta}_\ell - \boldsymbol{\theta}\|^2},$$
where L is the number of Monte Carlo simulations and $\boldsymbol{\theta}_\ell$ is the $\ell$-th random realization of $\hat{\boldsymbol{\theta}}$.
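In code, Equation (76) amounts to the following one-liner (illustrative Python/NumPy):

```python
import numpy as np

def rmse(theta_hats, theta):
    """Root-mean-square error over L Monte Carlo realizations, Equation (76).
    theta_hats: (L x 4) array of estimates; theta: true parameter vector."""
    return np.sqrt(np.mean(np.sum((theta_hats - theta)**2, axis=1)))
```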
The RMSEs of our estimator, SI-TS and TS-WLS are compared with HCRB here. The simulation settings are as follows. σ τ is 0.02 s, σ z is from 0 m to 200 m with a step size of 20 m, and the number of Monte Carlo simulations is 10 4 for each value of σ z . The comparison curves for both the position estimator and the velocity estimator are plotted respectively in Figure 3 and Figure 4. It is evident that our estimator has the least RMSE and can attain the HCRB accuracy at lower noise levels for determining both the position and the velocity.

7.2. Bias Calculation

In this subsection, we evaluate the bias of our estimator. The simulation settings are as follows. σ τ is from 0.02 s to 0.2 s with a step size of 0.02 s, and σ z is from 20 m to 200 m with a step size of 20 m. The norms of the theoretical bias vectors of u ^ and u ˙ ^ are calculated using results from Section 6 and further visualized as surface plots in Figure 5 and Figure 6. It is consistent with intuition that the biases of both u ^ and u ˙ ^ increase with both σ τ and σ z . It should be noted that the biases are relatively small compared with the norms of u = [ 0 , 2000 ] T m and u ˙ = [ 20 , 10 ] T m/s, even if the noise levels are high, e.g., σ τ = 0.2 s and σ z = 200 m.

7.3. Localizing Multiple Disjoint Targets

The aim of this section is to evaluate the computational complexity of the algorithm in the sense of scalability, since the WLS computations involved in our estimator are computationally efficient. One advantage of our estimator is that it is readily extended to the location of multiple disjoint targets by concatenating the data matrices of Section 5. Let the number of disjoint targets be K; $10^3$ Monte Carlo experiments of joint location are performed for each value of K (= 1, 2, 4, 8, 16, 32, 64). The running time of the $10^3$ experiments is then recorded. For convenience of comparison, we normalize the running time for each K by the one for K = 1. The normalized running times are plotted in Figure 7 on a log-log scale. It can be seen that the running time grows almost exponentially with respect to the number of targets. This observation indicates that localizing multiple targets sequentially is more time-efficient than localizing them simultaneously using our estimator. Such a defect may be rooted in the fact that our joint estimator does not share the nuisance parameters across the multiple targets.

7.4. Large-Scale Simulation Experiments

The location scenario in Sections 7.1 through 7.3 is the one examined in [6]. In order to evaluate the performance of the proposed estimator more comprehensively, we design the following large-scale random experiments. In view of the symmetry of the transmitter $\mathbf{t}_i$ and receiver $\mathbf{s}_j$ in the observation model, we fix the number of transmitters to 1 and increase the number of receivers from 21 to 100. The transmitter's position is fixed at $[0, 0]^T$ m. Both the x-coordinate and y-coordinate of each receiver's position are drawn from the uniform distribution on the interval $[-5000, 5000]$ m. We set $\sigma_z = 20$ m and $\sigma_\tau = 0.02$ s. Other unspecified settings in these experiments are as given in Table 4. In each location scenario, we conduct $10^4$ Monte Carlo simulations. We then explore the effect of the number of receivers on the bias/RMSE and the computational complexity of the proposed estimator in Figures 8 and 9.
Figure 8 shows that increasing the number of receivers helps to reduce the RMSE of the estimator. It should be noted that increasing the number of receivers does not lead to a decrease of bias. This fact may imply that designing unbiased estimators is an inherently difficult problem in nonlinear estimation.
In addition, as can be seen in Figure 9, the estimator's relative running time scales linearly with the number of receivers once the number of receivers is large enough (e.g., N > 80 here). This trend coincides with the theoretical linear complexity obtained in Section 6.3.

8. Conclusions

This paper develops a non-iterative solution to the nonlinear hybrid parameter estimation problem of determining the position and velocity of a moving target in a multistatic sonar system in the presence of sensor position uncertainties. It outperforms conventional methods such as SI-TS and TS-WLS in RMSE, and can achieve the HCRB for moderate Gaussian observation noises and sensor position errors. Our estimator involves only two WLS minimizations. Thus, it is computationally efficient, and does not need to deal with the difficulties of initialization and local convergence. Moreover, we obtain the bias vector and covariance matrix of this estimator using perturbation analysis and multivariate statistics.

Author Contributions

Conceptualization, X.W. and J.L.; methodology, X.W.; software, X.W. and Z.Y.; validation, L.Y.; writing—original draft preparation, X.W. and Z.Y.; writing—review and editing, X.W. and L.Y.; funding acquisition, X.W. and L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61703185), the Natural Science Foundation of Jiangsu Province in China (Grant No. BK20140166) and the 111 Project (Grant No. B12018).

Acknowledgments

The authors would like to thank the anonymous reviewers and the editor for their careful reviews and constructive suggestions to help us improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HCRB: Hybrid Cramér–Rao bound
WLS: Weighted least squares
SI-TS: Spherical-interpolation initialized Taylor series method
SVD: Singular value decomposition
TS-WLS: Two-stage weighted least squares method
RMSE: Root-mean-square error

Appendix A. Jacobian Matrices for HCRB

Appendix A.1. Jacobian Matrix of Target Position and Velocity

$$\frac{\partial\bar{\mathbf{m}}}{\partial\boldsymbol{\theta}} = \begin{bmatrix} \partial\bar{\mathbf{m}}_1/\partial\mathbf{u} & \partial\bar{\mathbf{m}}_1/\partial\dot{\mathbf{u}} \\ \partial\bar{\mathbf{m}}_2/\partial\mathbf{u} & \partial\bar{\mathbf{m}}_2/\partial\dot{\mathbf{u}} \\ \vdots & \vdots \\ \partial\bar{\mathbf{m}}_M/\partial\mathbf{u} & \partial\bar{\mathbf{m}}_M/\partial\dot{\mathbf{u}} \end{bmatrix},$$
where
$$\frac{\partial\bar{\mathbf{m}}_i}{\partial\mathbf{u}} = \left[\left(\frac{\partial\tau_{i,1}^o}{\partial\mathbf{u}}\right)^T, \ldots, \left(\frac{\partial\tau_{i,N}^o}{\partial\mathbf{u}}\right)^T, \left(\frac{\partial f_{i,1}^o}{\partial\mathbf{u}}\right)^T, \ldots, \left(\frac{\partial f_{i,N}^o}{\partial\mathbf{u}}\right)^T\right]^T, \qquad \frac{\partial\bar{\mathbf{m}}_i}{\partial\dot{\mathbf{u}}} = \left[\left(\frac{\partial\tau_{i,1}^o}{\partial\dot{\mathbf{u}}}\right)^T, \ldots, \left(\frac{\partial\tau_{i,N}^o}{\partial\dot{\mathbf{u}}}\right)^T, \left(\frac{\partial f_{i,1}^o}{\partial\dot{\mathbf{u}}}\right)^T, \ldots, \left(\frac{\partial f_{i,N}^o}{\partial\dot{\mathbf{u}}}\right)^T\right]^T,$$
for $i = 1, 2, \ldots, M$, with
$$\frac{\partial\tau_{i,j}^o}{\partial\mathbf{u}} = \frac{1}{c}(\boldsymbol{\rho}_{\mathbf{u},\mathbf{t}_i} + \boldsymbol{\rho}_{\mathbf{u},\mathbf{s}_j})^T, \qquad \frac{\partial\tau_{i,j}^o}{\partial\dot{\mathbf{u}}} = \mathbf{0}_{1\times 2}, \qquad \frac{\partial f_{i,j}^o}{\partial\mathbf{u}} = \frac{1}{c}\dot{\mathbf{u}}^T(\mathbf{A}_{\mathbf{u},\mathbf{t}_i} + \mathbf{A}_{\mathbf{u},\mathbf{s}_j}), \qquad \frac{\partial f_{i,j}^o}{\partial\dot{\mathbf{u}}} = \frac{1}{c}(\boldsymbol{\rho}_{\mathbf{u},\mathbf{t}_i} + \boldsymbol{\rho}_{\mathbf{u},\mathbf{s}_j})^T,$$
for $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, N$.

Appendix A.2. Jacobian Matrix of Sensor Positions

$$\frac{\partial\bar{\mathbf{m}}}{\partial\mathbf{z}} = \begin{bmatrix} \partial\bar{\mathbf{m}}_1/\partial\mathbf{t} & \partial\bar{\mathbf{m}}_1/\partial\mathbf{s} \\ \partial\bar{\mathbf{m}}_2/\partial\mathbf{t} & \partial\bar{\mathbf{m}}_2/\partial\mathbf{s} \\ \vdots & \vdots \\ \partial\bar{\mathbf{m}}_M/\partial\mathbf{t} & \partial\bar{\mathbf{m}}_M/\partial\mathbf{s} \end{bmatrix},$$
where
$$\frac{\partial\bar{\mathbf{m}}_i}{\partial\mathbf{t}} = \left[\left(\frac{\partial\tau_{i,1}^o}{\partial\mathbf{t}}\right)^T, \ldots, \left(\frac{\partial\tau_{i,N}^o}{\partial\mathbf{t}}\right)^T, \left(\frac{\partial f_{i,1}^o}{\partial\mathbf{t}}\right)^T, \ldots, \left(\frac{\partial f_{i,N}^o}{\partial\mathbf{t}}\right)^T\right]^T, \qquad \frac{\partial\bar{\mathbf{m}}_i}{\partial\mathbf{s}} = \left[\left(\frac{\partial\tau_{i,1}^o}{\partial\mathbf{s}}\right)^T, \ldots, \left(\frac{\partial\tau_{i,N}^o}{\partial\mathbf{s}}\right)^T, \left(\frac{\partial f_{i,1}^o}{\partial\mathbf{s}}\right)^T, \ldots, \left(\frac{\partial f_{i,N}^o}{\partial\mathbf{s}}\right)^T\right]^T,$$
for $i = 1, 2, \ldots, M$, with blocks
$$\frac{\partial\tau_{i,j}^o}{\partial\mathbf{t}_k} = \begin{cases} (\boldsymbol{\rho}_{\mathbf{t}_i,\mathbf{u}} - \boldsymbol{\rho}_{\mathbf{t}_i,\mathbf{s}_j})^T/c, & i = k, \\ \mathbf{0}_{1\times 2}, & i \neq k, \end{cases} \qquad \frac{\partial\tau_{i,j}^o}{\partial\mathbf{s}_l} = \begin{cases} (\boldsymbol{\rho}_{\mathbf{s}_j,\mathbf{u}} - \boldsymbol{\rho}_{\mathbf{s}_j,\mathbf{t}_i})^T/c, & j = l, \\ \mathbf{0}_{1\times 2}, & j \neq l, \end{cases}$$
$$\frac{\partial f_{i,j}^o}{\partial\mathbf{t}_k} = \begin{cases} -\dot{\mathbf{u}}^T\mathbf{A}_{\mathbf{t}_i,\mathbf{u}}/c, & i = k, \\ \mathbf{0}_{1\times 2}, & i \neq k, \end{cases} \qquad \frac{\partial f_{i,j}^o}{\partial\mathbf{s}_l} = \begin{cases} -\dot{\mathbf{u}}^T\mathbf{A}_{\mathbf{s}_j,\mathbf{u}}/c, & j = l, \\ \mathbf{0}_{1\times 2}, & j \neq l, \end{cases}$$
for $i, k = 1, 2, \ldots, M$ and $j, l = 1, 2, \ldots, N$.

Appendix B. Matrices Related to Weighted Least Squares

A weighted least squares problem is an optimization problem of the form
$$\underset{\boldsymbol{\phi}}{\text{minimize}} \quad (\mathbf{h} - \mathbf{G}\boldsymbol{\phi})^T\mathbf{W}(\mathbf{h} - \mathbf{G}\boldsymbol{\phi}),$$
where $\mathbf{G}$ is the design matrix (of full column rank), $\mathbf{h}$ is the observation vector, $\boldsymbol{\phi}$ is the parameter vector, and $\mathbf{W}$ is the (positive definite) weight matrix. We refer to $(\mathbf{h} - \mathbf{G}\boldsymbol{\phi})$ as the residual vector.
We introduce the weighted residual vector $\mathbf{r}_W$ as follows:
$$\mathbf{r}_W = \mathbf{W}(\mathbf{h} - \mathbf{G}\boldsymbol{\phi}).$$
It follows from Equation (A4) that
$$\mathbf{W}^{-1}\mathbf{r}_W + \mathbf{G}\boldsymbol{\phi} = \mathbf{h}.$$
By the orthogonality principle of the least squares method,
$$\mathbf{G}^T\mathbf{r}_W = \mathbf{0}.$$
Combining Equations (A5) and (A6), we get
$$\mathbf{M}\begin{bmatrix}\mathbf{r}_W \\ \boldsymbol{\phi}\end{bmatrix} = \begin{bmatrix}\mathbf{h} \\ \mathbf{0}\end{bmatrix},$$
where
$$\mathbf{M} = \begin{bmatrix}\mathbf{W}^{-1} & \mathbf{G} \\ \mathbf{G}^T & \mathbf{0}\end{bmatrix}.$$
After finding the inverse of $\mathbf{M}$ by the matrix inversion lemma [46], we find the interesting fact that the matrices listed in Table 2 have the same form as specific blocks of $\mathbf{M}^{-1}$ in Equation (A9):
$$\mathbf{M}^{-1} = \begin{bmatrix}\mathbf{W}\left[\mathbf{I} - \mathbf{G}(\mathbf{G}^T\mathbf{W}\mathbf{G})^{-1}\mathbf{G}^T\mathbf{W}\right] & \mathbf{W}^T\mathbf{G}(\mathbf{G}^T\mathbf{W}\mathbf{G})^{-T} \\ (\mathbf{G}^T\mathbf{W}\mathbf{G})^{-1}\mathbf{G}^T\mathbf{W} & -(\mathbf{G}^T\mathbf{W}\mathbf{G})^{-1}\end{bmatrix}.$$
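The correspondence between the blocks of $\mathbf{M}^{-1}$ and the matrices of Table 2 can be checked numerically. The following self-contained snippet (our own illustration) verifies the top-left, bottom-left and bottom-right blocks on a random full-rank instance:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 7, 3
G = rng.standard_normal((m, n))                   # full column rank a.s.
A = rng.standard_normal((m, m))
W = A @ A.T + m*np.eye(m)                         # positive definite weight

Maug = np.block([[np.linalg.inv(W), G],
                 [G.T, np.zeros((n, n))]])
Minv = np.linalg.inv(Maug)

P = G.T @ W @ G
H = np.linalg.solve(P, G.T @ W)                   # H = P^{-1} G' W
K = W @ (np.eye(m) - G @ H)                       # K = W (I - G H)
print(np.allclose(Minv[:m, :m], K))               # top-left block
print(np.allclose(Minv[m:, :m], H))               # bottom-left block
print(np.allclose(Minv[m:, m:], -np.linalg.inv(P)))  # bottom-right block
```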

Appendix C. Linear Model for the First Stage of Our Algorithm

$$\mathbf{h}_1 = [\mathbf{h}_{1,1}^T, \mathbf{h}_{1,2}^T, \ldots, \mathbf{h}_{1,M}^T]^T,$$
where
$$\mathbf{h}_{1,i} = \begin{bmatrix} 2\left[\bar{\mathbf{t}}_i^T(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1) + c\tau_{i,1}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1\|\right] + c^2\tau_{i,1}^2 \\ 2\left[\bar{\mathbf{t}}_i^T(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_2) + c\tau_{i,2}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_2\|\right] + c^2\tau_{i,2}^2 \\ \vdots \\ 2\left[\bar{\mathbf{t}}_i^T(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N) + c\tau_{i,N}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N\|\right] + c^2\tau_{i,N}^2 \\ cf_{i,1}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1\| + c^2\tau_{i,1}f_{i,1} \\ cf_{i,2}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_2\| + c^2\tau_{i,2}f_{i,2} \\ \vdots \\ cf_{i,N}\|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N\| + c^2\tau_{i,N}f_{i,N} \end{bmatrix},$$
for $i = 1, 2, \ldots, M$.
$$\mathbf{G}_1 = [\mathbf{G}_{1,1}^T, \mathbf{G}_{1,2}^T, \ldots, \mathbf{G}_{1,M}^T]^T,$$
where
$$\mathbf{G}_{1,i} = \begin{bmatrix} \mathbf{G}_{1,i}(1{:}N, 1{:}M{+}2) & \mathbf{0}_{N\times(M+2)} \\ \mathbf{G}_{1,i}(N{+}1{:}2N, 1{:}M{+}2) & \mathbf{G}_{1,i}(N{+}1{:}2N, M{+}3{:}2M{+}4) \end{bmatrix},$$
$$\mathbf{G}_{1,i}(1{:}N, 1{:}M{+}2) = 2\,\mathbf{G}_{1,i}(N{+}1{:}2N, M{+}3{:}2M{+}4) = \begin{bmatrix} 2(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1)^T & \mathbf{0}_{1\times(i-1)} & 2(c\tau_{i,1} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1\|) & \mathbf{0}_{1\times(M-i)} \\ \vdots & \vdots & \vdots & \vdots \\ 2(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N)^T & \mathbf{0}_{1\times(i-1)} & 2(c\tau_{i,N} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N\|) & \mathbf{0}_{1\times(M-i)} \end{bmatrix},$$
$$\mathbf{G}_{1,i}(N{+}1{:}2N, 1{:}M{+}2) = [\mathbf{0}_{N\times(i+1)}, c\mathbf{f}_i, \mathbf{0}_{N\times(M-i)}],$$
for $i = 1, 2, \ldots, M$.
$$\mathbf{B}_1 = \mathbf{I}_M \otimes \mathbf{B}_{1,i},$$
where
$$\mathbf{B}_{1,i} = \begin{bmatrix} 2\,\mathrm{diag}(\|\mathbf{u} - \bar{\mathbf{s}}_1\|, \ldots, \|\mathbf{u} - \bar{\mathbf{s}}_N\|) & \mathbf{0}_{N\times N} \\ \mathrm{diag}(\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_1}^T\dot{\mathbf{u}}, \ldots, \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_N}^T\dot{\mathbf{u}}) & \mathrm{diag}(\|\mathbf{u} - \bar{\mathbf{s}}_1\|, \ldots, \|\mathbf{u} - \bar{\mathbf{s}}_N\|) \end{bmatrix},$$
for $i = 1, 2, \ldots, M$.
It follows from Equations (20) and (22) that the first-order error vector of the first stage is
$$\boldsymbol{\epsilon}_1^{(1)} = \mathbf{D}_z\Delta\mathbf{z} + c\Delta\mathbf{m} = [\mathbf{D}_z, c\mathbf{I}]\begin{bmatrix}\Delta\mathbf{z} \\ \Delta\mathbf{m}\end{bmatrix},$$
where
$$\mathbf{D}_z = [\mathbf{D}_t, \mathbf{D}_s], \qquad \mathbf{D}_t = \mathrm{blkdiag}(\mathbf{D}_{t,1}, \mathbf{D}_{t,2}, \ldots, \mathbf{D}_{t,M}), \qquad \mathbf{D}_s = [\mathbf{D}_{s,1}^T, \mathbf{D}_{s,2}^T, \ldots, \mathbf{D}_{s,M}^T]^T,$$
$$\mathbf{D}_{t,i} = [\boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\mathbf{u}} - \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_1}, \;\boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\mathbf{u}} - \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_2}, \;\ldots, \;\boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\mathbf{u}} - \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_N}, \;-\mathbf{1}_{1\times N}\otimes(\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i}\dot{\mathbf{u}})]^T,$$
$$\mathbf{D}_{s,i} = \begin{bmatrix}\mathrm{blkdiag}(\boldsymbol{\rho}_{\bar{\mathbf{s}}_1,\mathbf{u}} + \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_1}, \;\boldsymbol{\rho}_{\bar{\mathbf{s}}_2,\mathbf{u}} + \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_2}, \;\ldots, \;\boldsymbol{\rho}_{\bar{\mathbf{s}}_N,\mathbf{u}} + \boldsymbol{\rho}_{\bar{\mathbf{t}}_i,\bar{\mathbf{s}}_N})^T \\ -\mathrm{blkdiag}(\mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_1}\dot{\mathbf{u}}, \;\mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_2}\dot{\mathbf{u}}, \;\ldots, \;\mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_N}\dot{\mathbf{u}})^T\end{bmatrix},$$
for $i = 1, 2, \ldots, M$.
By inspection of Equations (23) and (24), the second-order error vector of the first stage, $\boldsymbol{\epsilon}_1^{(2)}$, can be expressed as
$$\boldsymbol{\epsilon}_1^{(2)} = [\boldsymbol{\epsilon}_{1,1}^{(2)T}, \boldsymbol{\epsilon}_{1,2}^{(2)T}, \ldots, \boldsymbol{\epsilon}_{1,M}^{(2)T}]^T,$$
where
$$\boldsymbol{\epsilon}_{1,i}^{(2)} = [(\boldsymbol{\epsilon}_{\tau,i}\circ\boldsymbol{\epsilon}_{\tau,i})^T, (\boldsymbol{\epsilon}_{\tau,i}\circ\boldsymbol{\epsilon}_{f,i})^T]^T,$$
for $i = 1, 2, \ldots, M$.

Appendix D. Linear Model for the Second Stage of Our Algorithm

$$\mathbf{h}_2 = \begin{bmatrix} \mathbf{0}_{2\times 1} \\ \hat{\alpha}_1^2 - \|\hat{\mathbf{u}} - \bar{\mathbf{t}}_1\|^2 \\ \hat{\alpha}_2^2 - \|\hat{\mathbf{u}} - \bar{\mathbf{t}}_2\|^2 \\ \vdots \\ \hat{\alpha}_M^2 - \|\hat{\mathbf{u}} - \bar{\mathbf{t}}_M\|^2 \\ \mathbf{0}_{2\times 1} \\ \hat{\alpha}_1\hat{\beta}_1 - (\hat{\mathbf{u}} - \bar{\mathbf{t}}_1)^T\hat{\dot{\mathbf{u}}} \\ \hat{\alpha}_2\hat{\beta}_2 - (\hat{\mathbf{u}} - \bar{\mathbf{t}}_2)^T\hat{\dot{\mathbf{u}}} \\ \vdots \\ \hat{\alpha}_M\hat{\beta}_M - (\hat{\mathbf{u}} - \bar{\mathbf{t}}_M)^T\hat{\dot{\mathbf{u}}} \end{bmatrix}.$$
$$\mathbf{G}_2 = \begin{bmatrix} -\mathbf{I}_2 & \mathbf{0}_{2\times 2} \\ 2[(\bar{\mathbf{t}}_1 - \hat{\mathbf{u}}), \ldots, (\bar{\mathbf{t}}_M - \hat{\mathbf{u}})]^T & \mathbf{0}_{M\times 2} \\ \mathbf{0}_{2\times 2} & -\mathbf{I}_2 \\ \mathbf{1}_{M\times 1}\otimes(-\hat{\dot{\mathbf{u}}}^T) & [(\bar{\mathbf{t}}_1 - \hat{\mathbf{u}}), \ldots, (\bar{\mathbf{t}}_M - \hat{\mathbf{u}})]^T \end{bmatrix}.$$
$$\mathbf{B}_2 = \begin{bmatrix} \mathrm{diag}(1, 1, 2\boldsymbol{\alpha}^T) & \mathbf{0}_{(M+2)\times(M+2)} \\ \mathrm{diag}(0, 0, \boldsymbol{\beta}^T) & \mathrm{diag}(1, 1, \boldsymbol{\alpha}^T) \end{bmatrix}.$$
Furthermore, Equations (37) and (38) indicate that the second-order error vector of the second stage is
$$\boldsymbol{\epsilon}_2^{(2)} = \begin{bmatrix} \mathbf{0}_{2\times 1} \\ (\Delta\mathbf{u}^T\Delta\mathbf{u})\mathbf{1}_{M\times 1} + \Delta\boldsymbol{\alpha}\circ\Delta\boldsymbol{\alpha} \\ \mathbf{0}_{2\times 1} \\ (\Delta\mathbf{u}^T\Delta\dot{\mathbf{u}})\mathbf{1}_{M\times 1} + \Delta\boldsymbol{\alpha}\circ\Delta\boldsymbol{\beta} \end{bmatrix}.$$

Appendix E. Formulas for Computing Bias Vector of Our Estimator

Appendix E.1. Formulas for Computing Bias in the First Stage

$$\Delta\mathbf{G}_1 = [\Delta\mathbf{G}_{1,1}^T, \Delta\mathbf{G}_{1,2}^T, \ldots, \Delta\mathbf{G}_{1,M}^T]^T,$$
where, for $i = 1, 2, \ldots, M$, with $\boldsymbol{\delta} = [\Delta\mathbf{z}^T, \Delta\mathbf{m}^T]^T$,
$$\Delta\mathbf{G}_{1,i} = c\,(2\mathbf{E}_{1,i}\boldsymbol{\delta}\,\mathbf{e}_{1,i} + \mathbf{E}_{3,i}\boldsymbol{\delta}\,\mathbf{e}_{1,i} + \mathbf{E}_{2,i}\boldsymbol{\delta}\,\mathbf{e}_{2,i}),$$
$$\mathbf{e}_{1,i} = [\mathbf{0}_{1\times(i+1)}, 1, \mathbf{0}_{1\times(2M+2-i)}], \qquad \mathbf{e}_{2,i} = [\mathbf{0}_{1\times(M+i+3)}, 1, \mathbf{0}_{1\times(M-i)}],$$
$$\mathbf{E}_{1,i} = [\mathbf{0}_{2N\times 2(M+iN)}, [\mathbf{I}_N, \mathbf{0}_N]^T, \mathbf{0}_{2N\times[2N(M-i)+N]}],$$
$$\mathbf{E}_{2,i} = [\mathbf{0}_{2N\times 2(M+iN)}, [\mathbf{0}_N, \mathbf{I}_N]^T, \mathbf{0}_{2N\times[2N(M-i)+N]}],$$
$$\mathbf{E}_{3,i} = [\mathbf{0}_{2N\times[2(M+iN)+N]}, [\mathbf{0}_N, \mathbf{I}_N]^T, \mathbf{0}_{2N\times 2N(M-i)}].$$
With some well-known formulas in multivariate statistics and Equation (A15), we list the related expected values as follows, where
$$\mathbf{Q} = \mathrm{blkdiag}(\mathbf{Q}_z, \mathbf{Q}_m).$$
  • Let $\mathbf{K}_1\{i\} = \mathbf{K}_1(2N(i-1)+1 : 2Ni, \,:)$ for $i = 1, 2, \ldots, M$. Then
$$\mathrm{E}\left[\Delta\mathbf{G}_1^T\mathbf{K}_1\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}\right] = \sum_{i=1}^M \mathrm{E}\left[\Delta\mathbf{G}_{1,i}^T\mathbf{K}_1\{i\}\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}\right] = 2c\sum_{i=1}^M \mathbf{e}_{1,i}^T\,\mathrm{tr}\left(\mathbf{E}_{1,i}^T\mathbf{K}_1\{i\}\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\mathbf{Q}\right) + c\sum_{i=1}^M \mathbf{e}_{1,i}^T\,\mathrm{tr}\left(\mathbf{E}_{3,i}^T\mathbf{K}_1\{i\}\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\mathbf{Q}\right) + c\sum_{i=1}^M \mathbf{e}_{2,i}^T\,\mathrm{tr}\left(\mathbf{E}_{2,i}^T\mathbf{K}_1\{i\}\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\mathbf{Q}\right).$$
  • $\mathrm{E}[\Delta\mathbf{G}_1\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}] = \left[\mathrm{E}[\Delta\mathbf{G}_{1,1}\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}]^T, \ldots, \mathrm{E}[\Delta\mathbf{G}_{1,M}\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}]^T\right]^T$, where
$$\mathrm{E}\left[\Delta\mathbf{G}_{1,i}\mathbf{H}_1^o\mathbf{B}_1\boldsymbol{\epsilon}_1^{(1)}\right] = 2c\,\mathbf{E}_{1,i}\mathbf{Q}\left(\mathbf{e}_{1,i}\mathbf{H}_1^o\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\right)^T + c\,\mathbf{E}_{3,i}\mathbf{Q}\left(\mathbf{e}_{1,i}\mathbf{H}_1^o\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\right)^T + c\,\mathbf{E}_{2,i}\mathbf{Q}\left(\mathbf{e}_{2,i}\mathbf{H}_1^o\mathbf{B}_1[\mathbf{D}_z, c\mathbf{I}]\right)^T,$$
for $i = 1, 2, \ldots, M$.

Appendix E.2. Formulas for Computing Bias in the Second Stage

$$\Delta\mathbf{G}_2 = \begin{bmatrix} \mathbf{0}_{2\times 2} & \mathbf{0}_{2\times 2} \\ -\mathbf{1}_{M\times 1}\otimes(2\Delta\mathbf{u}^T) & \mathbf{0}_{M\times 2} \\ \mathbf{0}_{2\times 2} & \mathbf{0}_{2\times 2} \\ -\mathbf{1}_{M\times 1}\otimes(\Delta\dot{\mathbf{u}}^T) & -\mathbf{1}_{M\times 1}\otimes(\Delta\mathbf{u}^T) \end{bmatrix}, \qquad \Delta\mathbf{B}_2 = \begin{bmatrix} \mathrm{diag}([0, 0, 2\Delta\boldsymbol{\alpha}^T]) & \mathbf{0}_{(M+2)\times(M+2)} \\ \mathrm{diag}([0, 0, \Delta\boldsymbol{\beta}^T]) & \mathrm{diag}([0, 0, \Delta\boldsymbol{\alpha}^T]) \end{bmatrix}.$$
All the expected values required for calculating $\boldsymbol{\mu}_2$ are listed as follows, where $\mathbf{S}$ is the mean squared error matrix of $\Delta\boldsymbol{\phi}_1$, i.e.,
$$\mathbf{S} = \mathbf{P}_1^{o-1} + \boldsymbol{\mu}_1\boldsymbol{\mu}_1^T.$$
  • $\mathrm{E}[(\Delta\alpha_i)^2] \approx \mathbf{S}(i+2, i+2)$ and $\mathrm{E}[\Delta\alpha_i\Delta\beta_i] \approx \mathbf{S}(i+2, i+M+4)$ for $i = 1, 2, \ldots, M$; $\mathrm{E}[\Delta\mathbf{u}^T\Delta\mathbf{u}] \approx \mathrm{tr}(\mathbf{S}(1{:}2, 1{:}2))$; $\mathrm{E}[\Delta\mathbf{u}^T\Delta\dot{\mathbf{u}}] \approx \mathrm{tr}(\mathbf{S}(1{:}2, M+3{:}M+4))$.
  • $$\Delta\mathbf{G}_2\mathbf{H}_2^o = -\begin{bmatrix} \mathbf{0}_{2\times(2M+4)} \\ \mathbf{1}_{M\times 1}\otimes\left(2\Delta\mathbf{u}^T\mathbf{H}_2^o(1{:}2, :)\right) \\ \mathbf{0}_{2\times(2M+4)} \\ \mathbf{1}_{M\times 1}\otimes\left(\Delta\dot{\mathbf{u}}^T\mathbf{H}_2^o(1{:}2, :) + \Delta\mathbf{u}^T\mathbf{H}_2^o(3{:}4, :)\right) \end{bmatrix},$$
    where
$$\mathrm{E}[\Delta\mathbf{u}^T\mathbf{H}_2^o(1{:}2,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathrm{tr}(\mathbf{H}_2^o(1{:}2,:)\mathbf{B}_2\mathbf{S}(:, 1{:}2)), \qquad \mathrm{E}[\Delta\dot{\mathbf{u}}^T\mathbf{H}_2^o(1{:}2,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathrm{tr}(\mathbf{H}_2^o(1{:}2,:)\mathbf{B}_2\mathbf{S}(:, M+3{:}M+4)), \qquad \mathrm{E}[\Delta\mathbf{u}^T\mathbf{H}_2^o(3{:}4,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathrm{tr}(\mathbf{H}_2^o(3{:}4,:)\mathbf{B}_2\mathbf{S}(:, 1{:}2)).$$
  • $$\Delta\mathbf{B}_2\mathbf{B}_2^{-1} = \begin{bmatrix} \mathbf{0}_{2\times(2M+4)} \\ 2\Delta\alpha_1\mathbf{B}_2^{-1}(3, :) \\ 2\Delta\alpha_2\mathbf{B}_2^{-1}(4, :) \\ \vdots \\ 2\Delta\alpha_M\mathbf{B}_2^{-1}(M+2, :) \\ \mathbf{0}_{2\times(2M+4)} \\ \Delta\alpha_1\mathbf{B}_2^{-1}(M+5, :) + \Delta\beta_1\mathbf{B}_2^{-1}(3, :) \\ \Delta\alpha_2\mathbf{B}_2^{-1}(M+6, :) + \Delta\beta_2\mathbf{B}_2^{-1}(4, :) \\ \vdots \\ \Delta\alpha_M\mathbf{B}_2^{-1}(2M+4, :) + \Delta\beta_M\mathbf{B}_2^{-1}(M+2, :) \end{bmatrix},$$
    where, for $i = 1, 2, \ldots, M$,
$$\mathrm{E}[\Delta\alpha_i\mathbf{B}_2^{-1}(i+2,:)\mathbf{U}\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+2,:)\mathbf{B}_2^T\mathbf{U}^T\mathbf{B}_2^{-1}(i+2,:)^T, \qquad \mathrm{E}[\Delta\alpha_i\mathbf{B}_2^{-1}(i+M+4,:)\mathbf{U}\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+2,:)\mathbf{B}_2^T\mathbf{U}^T\mathbf{B}_2^{-1}(i+M+4,:)^T, \qquad \mathrm{E}[\Delta\beta_i\mathbf{B}_2^{-1}(i+2,:)\mathbf{U}\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+M+4,:)\mathbf{B}_2^T\mathbf{U}^T\mathbf{B}_2^{-1}(i+2,:)^T.$$
  • $$\Delta\mathbf{G}_2^T\mathbf{K}_2 = -\begin{bmatrix} 2\Delta\mathbf{u}\sum_{i=1}^M\mathbf{K}_2(i+2, :) + \Delta\dot{\mathbf{u}}\sum_{i=1}^M\mathbf{K}_2(i+M+4, :) \\ \Delta\mathbf{u}\sum_{i=1}^M\mathbf{K}_2(i+M+4, :) \end{bmatrix},$$
    where
$$\mathrm{E}\left[\Delta\mathbf{u}\sum_{i=1}^M\mathbf{K}_2(i+2,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}\right] = \mathbf{S}(1{:}2,:)\mathbf{B}_2^T\sum_{i=1}^M\mathbf{K}_2(i+2,:)^T, \qquad \mathrm{E}\left[\Delta\dot{\mathbf{u}}\sum_{i=1}^M\mathbf{K}_2(i+M+4,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}\right] = \mathbf{S}(M+3{:}M+4,:)\mathbf{B}_2^T\sum_{i=1}^M\mathbf{K}_2(i+M+4,:)^T, \qquad \mathrm{E}\left[\Delta\mathbf{u}\sum_{i=1}^M\mathbf{K}_2(i+M+4,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}\right] = \mathbf{S}(1{:}2,:)\mathbf{B}_2^T\sum_{i=1}^M\mathbf{K}_2(i+M+4,:)^T.$$
  • $$\Delta\mathbf{B}_2^T\mathbf{K}_2 = \begin{bmatrix} \mathbf{0}_{2\times(2M+4)} \\ 2\Delta\alpha_1\mathbf{K}_2(3, :) + \Delta\beta_1\mathbf{K}_2(M+5, :) \\ 2\Delta\alpha_2\mathbf{K}_2(4, :) + \Delta\beta_2\mathbf{K}_2(M+6, :) \\ \vdots \\ 2\Delta\alpha_M\mathbf{K}_2(M+2, :) + \Delta\beta_M\mathbf{K}_2(2M+4, :) \\ \mathbf{0}_{2\times(2M+4)} \\ \Delta\alpha_1\mathbf{K}_2(M+5, :) \\ \Delta\alpha_2\mathbf{K}_2(M+6, :) \\ \vdots \\ \Delta\alpha_M\mathbf{K}_2(2M+4, :) \end{bmatrix},$$
    where, for $i = 1, 2, \ldots, M$,
$$\mathrm{E}[\Delta\alpha_i\mathbf{K}_2(i+2,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+2,:)\mathbf{B}_2^T\mathbf{K}_2(i+2,:)^T, \qquad \mathrm{E}[\Delta\alpha_i\mathbf{K}_2(i+M+4,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+2,:)\mathbf{B}_2^T\mathbf{K}_2(i+M+4,:)^T, \qquad \mathrm{E}[\Delta\beta_i\mathbf{K}_2(i+M+4,:)\mathbf{B}_2\boldsymbol{\epsilon}_2^{(1)}] = \mathbf{S}(i+M+4,:)\mathbf{B}_2^T\mathbf{K}_2(i+M+4,:)^T.$$

Appendix F. Some Formulas for Proving Proposition 2

By Equation (73),
$$\mathbf{G}_3 = [\mathbf{G}_{3,1}^T, \mathbf{G}_{3,2}^T, \ldots, \mathbf{G}_{3,M}^T]^T,$$
where
$$\mathbf{G}_{3,i} = \mathbf{B}_{1,i}^{-1}\mathbf{G}_{1,i}^o\cdot\mathbf{B}_2^{-1}\mathbf{G}_2^o.$$
By performing algebraic manipulations, we have
$$\mathbf{B}_{1,i}^{-1}\mathbf{G}_{1,i}^o = \begin{bmatrix}\mathbf{Y}_i & \mathbf{0}_{N\times(M+2)} \\ \mathbf{Z}_i & \mathbf{Y}_i\end{bmatrix}, \qquad \mathbf{B}_2^{-1}\mathbf{G}_2^o = \begin{bmatrix}-\mathbf{I}_2 & \mathbf{0}_{2\times 2} \\ -\mathbf{Y} & \mathbf{0}_{M\times 2} \\ \mathbf{0}_{2\times 2} & -\mathbf{I}_2 \\ -\mathbf{Z} & -\mathbf{Y}\end{bmatrix},$$
where
$$\mathbf{Y}_i = \begin{bmatrix} \dfrac{(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1)^T}{\|\mathbf{u} - \bar{\mathbf{s}}_1\|} & \mathbf{0}_{1\times(i-1)} & \dfrac{c\tau_{i,1} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1\|}{\|\mathbf{u} - \bar{\mathbf{s}}_1\|} & \mathbf{0}_{1\times(M-i)} \\ \vdots & \vdots & \vdots & \vdots \\ \dfrac{(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N)^T}{\|\mathbf{u} - \bar{\mathbf{s}}_N\|} & \mathbf{0}_{1\times(i-1)} & \dfrac{c\tau_{i,N} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N\|}{\|\mathbf{u} - \bar{\mathbf{s}}_N\|} & \mathbf{0}_{1\times(M-i)} \end{bmatrix},$$
$$\mathbf{Z}_i = \begin{bmatrix} -\dfrac{(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1)^T\,\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_1}^T\dot{\mathbf{u}}}{\|\mathbf{u} - \bar{\mathbf{s}}_1\|^2} & \mathbf{0}_{1\times(i-1)} & -\dfrac{(c\tau_{i,1} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_1\|)\,\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_1}^T\dot{\mathbf{u}}}{\|\mathbf{u} - \bar{\mathbf{s}}_1\|^2} + \dfrac{cf_{i,1}}{\|\mathbf{u} - \bar{\mathbf{s}}_1\|} & \mathbf{0}_{1\times(M-i)} \\ \vdots & \vdots & \vdots & \vdots \\ -\dfrac{(\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N)^T\,\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_N}^T\dot{\mathbf{u}}}{\|\mathbf{u} - \bar{\mathbf{s}}_N\|^2} & \mathbf{0}_{1\times(i-1)} & -\dfrac{(c\tau_{i,N} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_N\|)\,\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_N}^T\dot{\mathbf{u}}}{\|\mathbf{u} - \bar{\mathbf{s}}_N\|^2} + \dfrac{cf_{i,N}}{\|\mathbf{u} - \bar{\mathbf{s}}_N\|} & \mathbf{0}_{1\times(M-i)} \end{bmatrix},$$
$$\mathbf{Y} = \begin{bmatrix}\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_1}^T \\ \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_2}^T \\ \vdots \\ \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_M}^T\end{bmatrix}, \qquad \mathbf{Z} = \begin{bmatrix}\dot{\mathbf{u}}^T\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_1}^T \\ \dot{\mathbf{u}}^T\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_2}^T \\ \vdots \\ \dot{\mathbf{u}}^T\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_M}^T\end{bmatrix}.$$
Expanding Equation (A24) with Equation (A25), the details of $\mathbf{G}_{3,i}$ ($i = 1, 2, \ldots, M$) are as follows, where $j = 1, 2, \ldots, N$:
$$\mathbf{G}_{3,i}(j, 1{:}2) = -\frac{1}{\|\mathbf{u} - \bar{\mathbf{s}}_j\|}\left[(c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|)\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i} + \bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\right]^T,$$
$$\mathbf{G}_{3,i}(N+j, 1{:}2) = \left[\frac{(\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j}^T\dot{\mathbf{u}})\left[(c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|)\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i} + \bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\right]}{\|\mathbf{u} - \bar{\mathbf{s}}_j\|^2} - \frac{cf_{i,j}\,\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i}}{\|\mathbf{u} - \bar{\mathbf{s}}_j\|} - \frac{c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\|}{\|\mathbf{u} - \bar{\mathbf{s}}_j\|}\,\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i}\dot{\mathbf{u}}\right]^T,$$
$$\mathbf{G}_{3,i}(j, 3{:}4) = \mathbf{0}_{1\times 2},$$
$$\mathbf{G}_{3,i}(N+j, 3{:}4) = \mathbf{G}_{3,i}(j, 1{:}2).$$
When the small error Conditions 1 through 6 are satisfied, by Equations (19) and (21), we have
$$c\tau_{i,j} + \|\bar{\mathbf{t}}_i - \bar{\mathbf{s}}_j\| \approx \|\mathbf{u} - \bar{\mathbf{t}}_i\| + \|\mathbf{u} - \bar{\mathbf{s}}_j\|,$$
$$cf_{i,j} \approx \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i}^T\dot{\mathbf{u}} + \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j}^T\dot{\mathbf{u}}.$$
We also note that
$$\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i}\dot{\mathbf{u}} = \frac{\beta_i}{\alpha_i^2}(\bar{\mathbf{t}}_i - \mathbf{u}) + \frac{1}{\alpha_i}\dot{\mathbf{u}},$$
$$\mathbf{A}_{\mathbf{x},\mathbf{y}} = \frac{1}{\|\mathbf{x} - \mathbf{y}\|}\left[\mathbf{I} - \boldsymbol{\rho}_{\mathbf{x},\mathbf{y}}\boldsymbol{\rho}_{\mathbf{x},\mathbf{y}}^T\right].$$
Combining the above formulas,
$$\mathbf{G}_{3,i}(N+j, 3{:}4) = \mathbf{G}_{3,i}(j, 1{:}2) \approx -(\boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{t}}_i} + \boldsymbol{\rho}_{\mathbf{u},\bar{\mathbf{s}}_j})^T,$$
$$\mathbf{G}_{3,i}(N+j, 1{:}2) \approx -\left[(\mathbf{A}_{\mathbf{u},\bar{\mathbf{t}}_i} + \mathbf{A}_{\mathbf{u},\bar{\mathbf{s}}_j})\dot{\mathbf{u}}\right]^T.$$

References

1. Zhang, Y.; Ho, K.C. Multistatic localization in the absence of transmitter position. IEEE Trans. Signal Process. 2019, 67, 4745–4760.
2. He, C.; Wang, Y.; Yu, W.; Song, L. Underwater target localization and synchronization for a distributed SIMO sonar with an isogradient SSP and uncertainties in receiver locations. Sensors 2019, 19, 1976.
3. Liang, J.; Chen, Y.; So, H.; Jing, Y. Circular/hyperbolic/elliptic localization via Euclidean norm elimination. Signal Process. 2018, 148, 102–113.
4. Peters, D.J. A Bayesian method for localization by multistatic active sonar. IEEE J. Ocean. Eng. 2017, 42, 135–142.
5. Yang, L.; Yang, L.; Ho, K.C. Moving target localization in multistatic sonar by differential delays and Doppler shifts. IEEE Signal Process. Lett. 2016, 23, 1160–1164.
6. Rui, L.; Ho, K.C. Efficient closed-form estimators for multistatic sonar localization. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 600–614.
7. Rui, L.; Ho, K.C. Elliptic localization: Performance study and optimum receiver placement. IEEE Trans. Signal Process. 2014, 62, 4673–4688.
8. Ehlers, F.; Ricci, G.; Orlando, D. Batch tracking algorithm for multistatic sonars. IET Radar Sonar Navig. 2012, 6, 746–752.
9. Daun, M.; Ehlers, F. Tracking algorithms for multistatic sonar systems. EURASIP J. Adv. Signal Process. 2010, 2010, 461538.
10. Simakov, S. Localization in airborne multistatic sonars. IEEE J. Ocean. Eng. 2008, 33, 278–288.
11. Coraluppi, S. Multistatic sonar localization. IEEE J. Ocean. Eng. 2006, 31, 964–974.
12. Coraluppi, S.; Carthel, C. Distributed tracking in multistatic sonar. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1138–1147.
13. Sandys-Wunsch, M.; Hazen, M.G. Multistatic localization error due to receiver positioning errors. IEEE J. Ocean. Eng. 2002, 27, 328–334.
14. Shin, H.; Chung, W. Target localization using double-sided bistatic range measurements in distributed MIMO radar systems. Sensors 2019, 19, 2524.
15. Amiri, R.; Behnia, F.; Noroozi, A. Efficient algebraic solution for elliptic target localisation and antenna position refinement in multiple-input–multiple-output radars. IET Radar Sonar Navig. 2019, 13, 2046–2054.
16. Amiri, R.; Behnia, F.; Noroozi, A. Efficient joint moving target and antenna localization in distributed MIMO radars. IEEE Trans. Wirel. Commun. 2019, 18, 4425–4435.
17. Amiri, R.; Behnia, F.; Zamani, H. Asymptotically efficient target localization from bistatic range measurements in distributed MIMO radars. IEEE Signal Process. Lett. 2017, 24, 299–303.
18. Einemo, M.; So, H.C. Weighted least squares algorithm for target localization in distributed MIMO radar. Signal Process. 2015, 115, 144–150.
19. Dianat, M.; Taban, M.R.; Dianat, J.; Sedighi, V. Target localization using least squares estimation for MIMO radars with widely separated antennas. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2730–2741.
20. Godrich, H.; Haimovich, A.M.; Blum, R.S. Target localization accuracy gain in MIMO radar-based systems. IEEE Trans. Inf. Theory 2010, 56, 2783–2803.
21. Zhao, Y.; Hu, D.; Zhao, Y.; Liu, Z.; Zhao, C. Refining inaccurate transmitter and receiver positions using calibration targets for target localization in multi-static passive radar. Sensors 2019, 19, 3365.
22. Wang, J.; Qin, Z.; Wei, S.; Sun, Z.; Xiang, H. Effects of nuisance variables selection on target localisation accuracy in multistatic passive radar. Electron. Lett. 2018, 54, 1139–1141.
23. Chalise, B.K.; Zhang, Y.D.; Amin, M.G.; Himed, B. Target localization in a multi-static passive radar system through convex optimization. Signal Process. 2014, 102, 207–215.
24. Gorji, A.A.; Tharmarasa, R.; Kirubarajan, T. Widely separated MIMO versus multistatic radars for target localization and tracking. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2179–2194.
25. Malanowski, M.; Kulpa, K. Two methods for target localization in multistatic passive radar. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 572–580.
26. Yin, Z.; Jiang, X.; Yang, Z.; Zhao, N.; Chen, Y. WUB-IP: A high-precision UWB positioning scheme for indoor multiuser applications. IEEE Syst. J. 2019, 13, 279–288.
27. Zhou, Y.; Law, C.L.; Guan, Y.L.; Chin, F. Indoor elliptical localization based on asynchronous UWB range measurement. IEEE Trans. Instrum. Meas. 2011, 60, 248–257.
28. Smith, J.O.; Abel, J.S. Closed-form least-squares source location estimation from range-difference measurements. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 1661–1669.
29. Schau, H.; Robinson, A. Passive source localization employing intersecting spherical surfaces from time-of-arrival differences. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 1223–1225.
30. Amiri, R.; Behnia, F.; Sadr, M.A.M. Exact solution for elliptic localization in distributed MIMO radar systems. IEEE Trans. Veh. Technol. 2018, 67, 1075–1086.
31. Chan, Y.T.; Ho, K.C. A simple and efficient estimator for hyperbolic location. IEEE Trans. Signal Process. 1994, 42, 1905–1915.
32. Zheng, B.; Yang, Z. Perturbation analysis for mixed least squares–total least squares problems. Numer. Linear Algebra Appl. 2019, 26, e2239.
33. Buranay, S.C.; Iyikal, O.C. A predictor-corrector iterative method for solving linear least squares problems and perturbation error analysis. J. Inequal. Appl. 2019, 2019, 203.
34. Xie, P.; Xiang, H.; Wei, Y. A contribution to perturbation analysis for total least squares problems. Numer. Algorithms 2017, 75, 381–395.
35. Harville, D.A. Linear Models and the Relevant Distributions and Matrix Algebra; CRC Press: Boca Raton, FL, USA, 2018.
36. Bar, S.; Tabrikian, J. The risk-unbiased Cramér–Rao bound for non-Bayesian multivariate parameter estimation. IEEE Trans. Signal Process. 2018, 66, 4920–4934.
37. Bergel, I.; Noam, Y. Lower bound on the localization error in infinite networks with random sensor locations. IEEE Trans. Signal Process. 2018, 66, 1228–1241.
38. Messer, H. The hybrid Cramér–Rao lower bound—From practice to theory. In Proceedings of the 4th IEEE Sensor Array and Multichannel Signal Processing Workshop, Waltham, MA, USA, 12–14 July 2006; pp. 304–307.
39. Noam, Y.; Messer, H. The hybrid Cramér–Rao bound and the generalized Gaussian linear estimation problem. In Proceedings of the 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, Darmstadt, Germany, 21–23 July 2008; pp. 395–399.
40. Van Trees, H.L.; Bell, K.L.; Tian, Z. Detection, Estimation, and Modulation Theory Part I: Detection, Estimation, and Filtering Theory; John Wiley & Sons: Hoboken, NJ, USA, 2013.
41. Rockah, Y.; Schultheiss, P. Array shape calibration using sources in unknown locations—Part I: Far-field sources. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 286–299.
42. Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics; John Wiley & Sons: Hoboken, NJ, USA, 2007.
43. Chui, C.K.; Chen, G. Kalman Filtering with Real-Time Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
44. Li, X.; Wang, S.; Cai, Y. Tutorial: Complexity analysis of singular value decomposition and its variants. arXiv 2019, arXiv:1906.12085.
45. Foy, W.H. Position-location solutions by Taylor-series estimation. IEEE Trans. Aerosp. Electron. Syst. 1976, 12, 187–194.
46. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013.
Figure 1. Flowchart of the proposed estimator. Algorithm 1 is called in the flowchart.
Figure 2. Nominal location geometry for computer simulations.
Figure 3. RMSE and HCRB for the position estimator.
Figure 4. RMSE and HCRB for the velocity estimator.
Figure 5. Surface plot of the norm of the approximate bias of $\hat{\mathbf{u}}$.
Figure 6. Surface plot of the norm of the approximate bias of $\hat{\dot{\mathbf{u}}}$.
Figure 7. Normalized running time for locating multiple disjoint targets.
Figure 8. Bias and RMSE of the proposed estimator versus the number of receivers.
Figure 9. Relative running time of the proposed estimator versus the number of receivers.
Table 1. List of symbols and notations (□ as a placeholder).
□^o: zero-order approximation of □
□̄: expected or nominal value of □
□̂: estimator of □
Δ□: random error or differential of □
Q_□: covariance of □
M: known number of transmitters
N: known number of receivers
t_i: actual unobservable position of the i-th transmitter, i = 1, ..., M
s_j: actual unobservable position of the j-th receiver, j = 1, ..., N
t: [t_1^T, t_2^T, ..., t_M^T]^T
s: [s_1^T, s_2^T, ..., s_N^T]^T
z: [t^T, s^T]^T
z̄: known nominal value of z
c: known signal speed
u: unknown position of the target
u̇: unknown velocity of the target
θ: [u^T, u̇^T]^T
τ_{i,j}: observed differential delay time between t_i and s_j
f_{i,j}: observed range rate between t_i and s_j
τ_i: [τ_{i,1}, τ_{i,2}, ..., τ_{i,N}]^T
f_i: [f_{i,1}, f_{i,2}, ..., f_{i,N}]^T
m_i: [τ_i^T, f_i^T]^T
m: [m_1^T, m_2^T, ..., m_M^T]^T
m̄: expected value of m
Δz: z − z̄, the position uncertainties of the transmitters and receivers
Δm: m − m̄, the observation errors
ρ_{x,y}: (x − y)/‖x − y‖, the gradient of ‖x − y‖ with respect to x
A_{x,y}: I/‖x − y‖ − (x − y)(x − y)^T/‖x − y‖^3, the Hessian of ‖x − y‖ with respect to x
Table 2. List of matrix symbols.
P_1 = G_1^T W_1 G_1
P_2 = G_2^T W_2 G_2
H_1 = P_1^{−1} G_1^T W_1
H_2 = P_2^{−1} G_2^T W_2
K_1 = W_1 (I − G_1^o H_1^o)
K_2 = W_2 (I − G_2^o H_2^o)
U = W_2^{−1} K_2
V = H_2^o W_2^{−1}
Table 3. Size of matrices.
G_1: (2MN) × (2M+4)
φ_1: (2M+4) × 1
h_1: (2MN) × 1
B_1: (2MN) × (2MN)
W_1: (2MN) × (2MN)
G_{1,i}: (2N) × (2M+4)
B_{1,i}: (2N) × (2N)
G_2: (2M+4) × 4
φ_2: 4 × 1
h_2: (2M+4) × 1
B_2: (2M+4) × (2M+4)
W_2: (2M+4) × (2M+4)
Q_m: (2MN) × (2MN)
Q_z: (2M+2N) × (2M+2N)
D_z: (2MN) × (2M+2N)
Table 4. Monte Carlo simulation settings.
M: 3
N: 5
c: 1500 m/s
t̄_1: [1500, 1500]^T m
t̄_2: [900, 4000]^T m
t̄_3: [3000, 4000]^T m
s̄_1: [1000, 3000]^T m
s̄_2: [2500, 500]^T m
s̄_3: [3000, 1000]^T m
s̄_4: [2000, 4000]^T m
s̄_5: [2000, 2000]^T m
R: 0.5·1_{N×N} + 0.5·I_N
σ_τ: 0.02:0.02:0.2 s
σ_f: σ_τ/√10
σ_z: 20:20:200 m
u: [0, 2000]^T m
u̇: [20, 10]^T m/s
